Search Results for author: Ian Tenney

Found 16 papers, 6 papers with code

Simfluence: Modeling the Influence of Individual Training Examples by Simulating Training Runs

no code implementations • 14 Mar 2023 • Kelvin Guu, Albert Webson, Ellie Pavlick, Lucas Dixon, Ian Tenney, Tolga Bolukbasi

To study such interactions, we propose Simfluence, a new paradigm for TDA (training data attribution) where the goal is not to produce a single influence score per example, but instead a training run simulator: the user asks, ``If my model had trained on example $z_1$, then $z_2$, ..., then $z_n$, how would it behave on $z_{test}$?''

Tasks: counterfactual, Language Modelling, +1 more
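As a rough illustration of the simulator interface described in the Simfluence abstract, the sketch below assumes a simple per-example linear update on the test loss. The `LinearRunSimulator` class, its `alpha`/`beta` parameters, and the example values are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of a training-run simulator in the spirit of Simfluence:
# given an ordered list of training examples, predict how the loss on a held-out
# test example evolves over the run. The per-example linear update used here is
# an illustrative assumption, not a faithful reimplementation of the paper.

from dataclasses import dataclass, field


@dataclass
class LinearRunSimulator:
    # alpha[z]: multiplicative effect of consuming example z on the test loss
    # beta[z]: additive effect of consuming example z on the test loss
    alpha: dict = field(default_factory=dict)
    beta: dict = field(default_factory=dict)

    def simulate(self, initial_loss: float, training_order: list) -> list:
        """Predict the test-example loss after each training step."""
        losses = [initial_loss]
        for z in training_order:
            a = self.alpha.get(z, 1.0)   # default: no multiplicative change
            b = self.beta.get(z, 0.0)    # default: no additive change
            losses.append(a * losses[-1] + b)
        return losses


# Usage: "If my model had trained on z1, then z2, then z3, how would it behave?"
sim = LinearRunSimulator(alpha={"z1": 0.95, "z2": 0.90, "z3": 1.02},
                         beta={"z1": -0.01, "z2": -0.02, "z3": 0.03})
print(sim.simulate(initial_loss=2.3, training_order=["z1", "z2", "z3"]))
```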

Towards Tracing Factual Knowledge in Language Models Back to the Training Data

1 code implementation • 23 May 2022 • Ekin Akyürek, Tolga Bolukbasi, Frederick Liu, Binbin Xiong, Ian Tenney, Jacob Andreas, Kelvin Guu

In this paper, we propose the problem of fact tracing: identifying which training examples taught an LM to generate a particular factual assertion.

Tasks: Information Retrieval, Retrieval
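The fact-tracing problem above can be read as ranking training examples by how strongly they support a factual assertion. The toy sketch below uses a naive lexical-overlap scorer purely as a placeholder; real tracing methods rely on retrieval or gradient-based influence scoring, and nothing here reflects the paper's actual experiments.

```python
# Toy illustration of the fact-tracing setup: rank training examples by a
# similarity score to a factual assertion and return the top candidates.
# The lexical-overlap scorer below is a placeholder assumption only.

def overlap_score(fact: str, example: str) -> float:
    fact_tokens = set(fact.lower().split())
    example_tokens = set(example.lower().split())
    return len(fact_tokens & example_tokens) / max(len(fact_tokens), 1)


def trace_fact(fact: str, training_corpus: list, k: int = 3) -> list:
    """Return the k training examples most likely to have taught the fact."""
    ranked = sorted(training_corpus,
                    key=lambda ex: overlap_score(fact, ex),
                    reverse=True)
    return ranked[:k]


corpus = [
    "Paris is the capital of France.",
    "The Eiffel Tower is located in Paris.",
    "Canberra is the capital of Australia.",
]
print(trace_fact("France's capital is Paris", corpus, k=2))
```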

Retrieval-guided Counterfactual Generation for QA

no code implementations • ACL 2022 • Bhargavi Paranjape, Matthew Lamm, Ian Tenney

To address these challenges, we develop a Retrieve-Generate-Filter (RGF) technique to create counterfactual evaluation and training data with minimal human supervision.

Tasks: counterfactual, Data Augmentation, +6 more
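One way to picture the Retrieve-Generate-Filter (RGF) recipe is as a three-stage pipeline. The skeleton below only shows the control flow; the `retrieve`, `generate`, and `keep` callables are hypothetical placeholders for a retriever, a QA-conditioned generator, and a quality filter, and the stand-ins in the demo are trivial.

```python
# Skeleton of a Retrieve-Generate-Filter (RGF) style pipeline for creating
# counterfactual QA examples. The three stage functions are placeholders:
# plug in a retriever, a generator, and a filter model of your choice.

from typing import Callable, Iterable, List, Tuple

Example = Tuple[str, str]  # (question, answer)


def rgf_pipeline(seed: Example,
                 retrieve: Callable[[Example], Iterable[str]],
                 generate: Callable[[Example, str], Iterable[Example]],
                 keep: Callable[[Example, Example], bool]) -> List[Example]:
    """Create counterfactual examples around a seed (question, answer) pair."""
    counterfactuals: List[Example] = []
    for passage in retrieve(seed):                 # 1. retrieve related evidence
        for candidate in generate(seed, passage):  # 2. generate perturbed QA pairs
            if keep(seed, candidate):              # 3. filter low-quality candidates
                counterfactuals.append(candidate)
    return counterfactuals


# Trivial stand-ins just to show the control flow end to end.
demo = rgf_pipeline(
    seed=("Who wrote Hamlet?", "Shakespeare"),
    retrieve=lambda ex: ["Marlowe wrote Doctor Faustus."],
    generate=lambda ex, passage: [("Who wrote Doctor Faustus?", "Marlowe")],
    keep=lambda seed, cand: cand != seed,
)
print(demo)
```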

Do Language Embeddings Capture Scales?

no code implementations • EMNLP (BlackboxNLP) 2020 • Xikun Zhang, Deepak Ramachandran, Ian Tenney, Yanai Elazar, Dan Roth

Pretrained Language Models (LMs) have been shown to possess significant linguistic, common sense, and factual knowledge.

Tasks: Common Sense Reasoning

What Happens To BERT Embeddings During Fine-tuning?

no code implementations • EMNLP (BlackboxNLP) 2020 • Amil Merchant, Elahe Rahimtoroghi, Ellie Pavlick, Ian Tenney

While there has been much recent work studying how linguistic information is encoded in pre-trained sentence representations, comparatively little is understood about how these models change when adapted to solve downstream tasks.

Tasks: Dependency Parsing

Asking without Telling: Exploring Latent Ontologies in Contextual Representations

no code implementations • EMNLP 2020 • Julian Michael, Jan A. Botha, Ian Tenney

The success of pretrained contextual encoders, such as ELMo and BERT, has brought a great deal of interest in what these models learn: do they, without explicit supervision, learn to encode meaningful notions of linguistic structure?

BERT Rediscovers the Classical NLP Pipeline

1 code implementation • ACL 2019 • Ian Tenney, Dipanjan Das, Ellie Pavlick

Pre-trained text encoders have rapidly advanced the state of the art on many NLP tasks.

Tasks: NER, POS

Looking for ELMo's friends: Sentence-Level Pretraining Beyond Language Modeling

no code implementations • ICLR 2019 • Samuel R. Bowman, Ellie Pavlick, Edouard Grave, Benjamin Van Durme, Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen

Work on the problem of contextualized word representation—the development of reusable neural network components for sentence understanding—has recently seen a surge of progress centered on the unsupervised pretraining task of language modeling with methods like ELMo (Peters et al., 2018).

Tasks: Language Modelling

Probing What Different NLP Tasks Teach Machines about Function Word Comprehension

no code implementations • SEMEVAL 2019 • Najoung Kim, Roma Patel, Adam Poliak, Alex Wang, Patrick Xia, R. Thomas McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, Ellie Pavlick

Our results show that pretraining on language modeling performs the best on average across our probing tasks, supporting its widespread use for pretraining state-of-the-art NLP models, and that CCG supertagging and NLI pretraining perform comparably.

Tasks: CCG Supertagging, Language Modelling, +2 more
