Search Results for author: Ian Tenney

Found 14 papers, 5 papers with code

Retrieval-guided Counterfactual Generation for QA

no code implementations • ACL 2022 • Bhargavi Paranjape, Matthew Lamm, Ian Tenney

To address these challenges, we develop a Retrieve-Generate-Filter (RGF) technique to create counterfactual evaluation and training data with minimal human supervision.

Data Augmentation • Question Answering • +2
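The excerpt names the three stages of the technique but not how they fit together, so the sketch below shows one minimal way a Retrieve-Generate-Filter loop could be wired up. The QAExample type and the retrieve, generate, and keep callables are hypothetical placeholders, not the components used in the paper.

```python
# Minimal sketch of a Retrieve-Generate-Filter (RGF) loop for counterfactual
# QA data. All helpers passed in are hypothetical stand-ins, not the paper's.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class QAExample:
    question: str
    context: str
    answer: str

def retrieve_generate_filter(
    seed: QAExample,
    retrieve: Callable[[str], List[str]],          # retriever over a corpus
    generate: Callable[[str, str], QAExample],     # QA-pair generator
    keep: Callable[[QAExample, QAExample], bool],  # filter for useful counterfactuals
    n_candidates: int = 10,
) -> List[QAExample]:
    """Retrieve alternative contexts, generate candidate QA pairs, keep the good ones."""
    counterfactuals = []
    for context in retrieve(seed.question)[:n_candidates]:
        candidate = generate(seed.question, context)
        if keep(seed, candidate):
            counterfactuals.append(candidate)
    return counterfactuals

# Toy usage with stand-in callables, just to show the data flow.
seed = QAExample("Who wrote Hamlet?", "Hamlet is a play by Shakespeare.", "Shakespeare")
out = retrieve_generate_filter(
    seed,
    retrieve=lambda q: ["Macbeth is a play by Shakespeare."],
    generate=lambda q, ctx: QAExample("Who wrote Macbeth?", ctx, "Shakespeare"),
    keep=lambda a, b: b.question != a.question,
)
print(out)
```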

What Happens To BERT Embeddings During Fine-tuning?

no code implementations • EMNLP (BlackboxNLP) 2020 • Amil Merchant, Elahe Rahimtoroghi, Ellie Pavlick, Ian Tenney

While there has been much recent work studying how linguistic information is encoded in pre-trained sentence representations, comparatively little is understood about how these models change when adapted to solve downstream tasks.

Dependency Parsing
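As a rough illustration of the kind of before/after comparison the title suggests, the snippet below measures per-token cosine similarity of one layer's representations between the public pre-trained checkpoint and a fine-tuned copy. The fine-tuned model path is a placeholder, and this setup is not the paper's analysis protocol.

```python
# Sketch: compare one layer's token states before vs. after fine-tuning.
# "bert-base-uncased" is the public checkpoint; FINETUNED_PATH is a placeholder
# for a locally fine-tuned model -- not something from the paper.
import torch
from transformers import AutoModel, AutoTokenizer

PRETRAINED = "bert-base-uncased"
FINETUNED_PATH = "path/to/your/finetuned-bert"  # hypothetical

def layer_states(model_name: str, text: str, layer: int) -> torch.Tensor:
    tokenizer = AutoTokenizer.from_pretrained(PRETRAINED)
    model = AutoModel.from_pretrained(model_name, output_hidden_states=True)
    model.eval()
    with torch.no_grad():
        out = model(**tokenizer(text, return_tensors="pt"))
    return out.hidden_states[layer].squeeze(0)  # (tokens, hidden)

text = "The probe measures how much the representation moved."
before = layer_states(PRETRAINED, text, layer=8)
after = layer_states(FINETUNED_PATH, text, layer=8)

# Per-token cosine similarity between the two checkpoints at this layer.
sim = torch.nn.functional.cosine_similarity(before, after, dim=-1)
print(sim.mean().item())
```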

Asking without Telling: Exploring Latent Ontologies in Contextual Representations

no code implementations • EMNLP 2020 • Julian Michael, Jan A. Botha, Ian Tenney

The success of pretrained contextual encoders, such as ELMo and BERT, has brought a great deal of interest in what these models learn: do they, without explicit supervision, learn to encode meaningful notions of linguistic structure?
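One way to make that question concrete is an unsupervised probe: cluster contextual token vectors without any labels and see whether coherent categories emerge. The sketch below does this with k-means over BERT token states; it is an illustrative setup, not the authors' method.

```python
# Sketch: look for latent categories in contextual token vectors by clustering
# them with no supervision. Illustrative only; not the paper's probing setup.
import torch
from sklearn.cluster import KMeans
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentences = [
    "The cat sat on the mat.",
    "She quickly signed the contract.",
    "Paris is the capital of France.",
]

vectors, tokens = [], []
with torch.no_grad():
    for sent in sentences:
        enc = tokenizer(sent, return_tensors="pt")
        hidden = model(**enc).last_hidden_state.squeeze(0)  # (tokens, hidden)
        vectors.append(hidden)
        tokens.extend(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]))

X = torch.cat(vectors).numpy()
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# Inspect which tokens land in each induced cluster.
for k in range(5):
    print(k, [t for t, c in zip(tokens, clusters) if c == k])
```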

Looking for ELMo's friends: Sentence-Level Pretraining Beyond Language Modeling

no code implementations • ICLR 2019 • Samuel R. Bowman, Ellie Pavlick, Edouard Grave, Benjamin Van Durme, Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen

Work on the problem of contextualized word representation—the development of reusable neural network components for sentence understanding—has recently seen a surge of progress centered on the unsupervised pretraining task of language modeling with methods like ELMo (Peters et al., 2018).

Language Modelling

Probing What Different NLP Tasks Teach Machines about Function Word Comprehension

no code implementations • SEMEVAL 2019 • Najoung Kim, Roma Patel, Adam Poliak, Alex Wang, Patrick Xia, R. Thomas McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, Ellie Pavlick

Our results show that pretraining on language modeling performs the best on average across our probing tasks, supporting its widespread use for pretraining state-of-the-art NLP models, and that CCG supertagging and NLI pretraining perform comparably.

CCG Supertagging • Language Modelling • +1
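The comparison above rests on probing tasks, i.e. fitting a small classifier on top of frozen representations and reading its accuracy as a measure of what the encoder learned. The snippet below is a generic linear-probe sketch over toy data, not the SemEval paper's experimental setup.

```python
# Sketch of a generic linear probe: freeze the encoder, fit a small classifier
# on its sentence representations, and use accuracy as a proxy for what the
# pretraining task taught the encoder. Toy data, not the paper's protocol.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()

def embed(sentence: str) -> list:
    """Mean-pooled token states from the frozen encoder."""
    with torch.no_grad():
        out = encoder(**tokenizer(sentence, return_tensors="pt"))
    return out.last_hidden_state.mean(dim=1).squeeze(0).tolist()

# Hypothetical probing data: does the sentence contain negation?
texts = ["He did not leave.", "She left early.", "Nobody came.", "They arrived."]
labels = [1, 0, 1, 0]

probe = LogisticRegression().fit([embed(t) for t in texts], labels)
print(probe.predict([embed("I never agreed.")]))
```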
