Search Results for author: Ellie Pavlick

Found 55 papers, 14 papers with code

Transferring Representations of Logical Connectives

no code implementations ACL (NALOMA, IWCS) 2021 Aaron Traylor, Ellie Pavlick, Roman Feiman

In modern natural language processing pipelines, it is common practice to “pretrain” a generative language model on a large corpus of text, and then to “finetune” the resulting representations by continuing to train them on a discriminative textual inference task.

Language Modelling
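
A minimal sketch of the pretrain-then-finetune recipe this abstract describes, using Hugging Face Transformers; the checkpoint (bert-base-uncased), task (SNLI), and hyperparameters are illustrative assumptions, not the paper's setup.

```python
# Sketch of "pretrain, then finetune on a discriminative inference task".
# All specifics (model, dataset, hyperparameters) are illustrative choices.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# 1) Start from representations pretrained with a language-modeling objective.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # entailment / neutral / contradiction

# 2) Continue training those representations on textual inference (SNLI).
snli = load_dataset("snli").filter(lambda ex: ex["label"] != -1)

def encode(batch):
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, padding="max_length", max_length=128)

snli = snli.map(encode, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="nli-finetune", num_train_epochs=1),
    train_dataset=snli["train"],
)
trainer.train()
```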

Do Vision-Language Pretrained Models Learn Primitive Concepts?

no code implementations 31 Mar 2022 Tian Yun, Usha Bhalla, Ellie Pavlick, Chen Sun

Our study reveals that state-of-the-art VL pretrained models learn primitive concepts that are highly useful as visual descriptors, as demonstrated by their strong performance on fine-grained visual recognition tasks; however, these concepts struggle to provide interpretable compositional derivations, which highlights limitations of existing VL models.

Fine-Grained Visual Recognition, Zero-Shot Learning

A Novel Corpus of Discourse Structure in Humans and Computers

1 code implementation 10 Nov 2021 Babak Hemmatian, Sheridan Feucht, Rachel Avram, Alexander Wey, Muskaan Garg, Kate Spitalnic, Carsten Eickhoff, Ellie Pavlick, Bjorn Sandstede, Steven Sloman

We present a novel corpus of 445 human- and computer-generated documents, comprising about 27,000 clauses, annotated for semantic clause types and coherence relations that allow for nuanced comparison of artificial and natural discourse modes.

Text Generation

Mapping Language Models to Grounded Conceptual Spaces

no code implementations ICLR 2022 Roma Patel, Ellie Pavlick

A fundamental criticism of text-only language models (LMs) is their lack of grounding: that is, the ability to tie a word for which they have learned a representation to its actual use in the world.

Does Vision-and-Language Pretraining Improve Lexical Grounding?

1 code implementation Findings (EMNLP) 2021 Tian Yun, Chen Sun, Ellie Pavlick

Linguistic representations derived from text alone have been criticized for their lack of grounding, i.e., connecting words to their meanings in the physical world.

Question Answering, Visual Question Answering

Frequency Effects on Syntactic Rule Learning in Transformers

1 code implementation EMNLP 2021 Jason Wei, Dan Garrette, Tal Linzen, Ellie Pavlick

Pre-trained language models perform well on a variety of linguistic tasks that require symbolic reasoning, raising the question of whether such models implicitly represent abstract symbols and rules.

Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color

no code implementations CoNLL (EMNLP) 2021 Mostafa Abdou, Artur Kulmizev, Daniel Hershcovich, Stella Frank, Ellie Pavlick, Anders Søgaard

Pretrained language models have been shown to encode relational information, such as the relations between entities or concepts in knowledge bases, e.g., (Paris, Capital, France).

Pretrained Language Models
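
A toy flavor of this kind of relational probing (illustrative only; the paper's actual case study concerns color terms) is to query a masked language model for the object of a relation:

```python
# Query a masked LM for the object of a relational triple such as
# (Paris, Capital, France). Purely illustrative; not the paper's method.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for cand in fill("Paris is the capital of [MASK].", top_k=3):
    print(cand["token_str"], round(cand["score"], 3))
```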

Do Prompt-Based Models Really Understand the Meaning of their Prompts?

1 code implementation 2 Sep 2021 Albert Webson, Ellie Pavlick

We find that models learn just as fast with many prompts that are intentionally irrelevant or even pathologically misleading as they do with instructively "good" prompts.

Few-Shot Learning, Natural Language Inference
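
To make the contrast concrete, here is a hypothetical illustration of an instructive prompt versus irrelevant and misleading variants for a single NLI example; the template wordings below are invented, not the paper's prompts.

```python
# Hypothetical prompt templates for one NLI pair. The finding above is
# that models learn comparably fast from templates of all three kinds.
premise, hypothesis = "A dog is running.", "An animal is moving."

templates = {
    "instructive": f'{premise} Question: does this imply "{hypothesis}"? Yes or no?',
    "irrelevant":  f'{premise} Is this written in English? "{hypothesis}" Yes or no?',
    "misleading":  f'{premise} Does this contradict "{hypothesis}"? Yes or no?',
}
for kind, prompt in templates.items():
    print(f"{kind:>11}: {prompt}")
```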

AND does not mean OR: Using Formal Languages to Study Language Models' Representations

no code implementations ACL 2021 Aaron Traylor, Roman Feiman, Ellie Pavlick

A current open question in natural language processing is to what extent language models, which are trained with access only to the form of language, are able to capture the meaning of language.

Language Modelling

Which Linguist Invented the Lightbulb? Presupposition Verification for Question-Answering

no code implementations ACL 2021 Najoung Kim, Ellie Pavlick, Burcu Karagol Ayan, Deepak Ramachandran

Through a user preference study, we demonstrate that the oracle behavior of our proposed system, which provides responses based on presupposition failure, is preferred over the oracle behavior of existing QA systems.

Explanation Generation, Question Answering
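
A toy sketch of that oracle behavior: instead of force-answering an unanswerable question, surface the failed presupposition. Presupposition extraction and verification are faked with a lookup table here purely for illustration.

```python
# Illustrative presupposition-aware QA behavior (not the paper's system).
FAILED_PRESUPPOSITIONS = {
    "Which linguist invented the lightbulb?":
        "a linguist invented the lightbulb",
}

def answer(question: str) -> str:
    if question in FAILED_PRESUPPOSITIONS:
        return (f"Unanswerable: the question presupposes that "
                f"{FAILED_PRESUPPOSITIONS[question]}, which is false.")
    return "<fall back to ordinary QA>"

print(answer("Which linguist invented the lightbulb?"))
```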

Information-theoretic Probing Explains Reliance on Spurious Features

no code implementations ICLR 2021 Charles Lovering, Rohan Jha, Tal Linzen, Ellie Pavlick

In this work, we test the hypothesis that the extent to which a feature influences a model's decisions can be predicted using a combination of two factors: The feature's "extractability" after pre-training (measured using information-theoretic probing techniques), and the "evidence" available during fine-tuning (defined as the feature's co-occurrence rate with the label).
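
A crude, runnable stand-in for those two factors, under simplifying assumptions: "evidence" is approximated as the feature's co-occurrence rate with the label, and "extractability" as the accuracy of a linear probe on frozen representations (the paper itself uses information-theoretic probes).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def evidence(feature_present, labels):
    """Co-occurrence rate: P(label = 1 | feature present)."""
    mask = feature_present.astype(bool)
    return labels[mask].mean() if mask.any() else 0.0

def extractability(frozen_reps, feature_present):
    """Accuracy of a linear probe predicting the feature from representations."""
    probe = LogisticRegression(max_iter=1000).fit(frozen_reps, feature_present)
    return probe.score(frozen_reps, feature_present)

# Toy data: 200 examples with 16-dim "frozen" representations.
rng = np.random.default_rng(0)
reps = rng.normal(size=(200, 16))
feat = (reps[:, 0] > 0).astype(int)                     # linearly extractable feature
labels = (feat & (rng.random(200) < 0.9)).astype(int)   # feature mostly co-occurs with label

print("evidence:", evidence(feat, labels))
print("extractability:", extractability(reps, feat))
```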

Spatial Language Understanding for Object Search in Partially Observed City-scale Environments

1 code implementation 4 Dec 2020 Kaiyu Zheng, Deniz Bayazit, Rebecca Mathew, Ellie Pavlick, Stefanie Tellex

We propose SLOOP (Spatial Language Object-Oriented POMDP), a new framework for partially observable decision making with a probabilistic observation model for spatial language.

Decision Making
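
A grossly simplified illustration of a probabilistic observation model for spatial language; the grid, relations, and scoring rule below are invented for illustration, and SLOOP's actual model is far richer.

```python
import numpy as np

def spatial_likelihood(grid_shape, landmark, relation):
    """Normalized belief over grid cells given e.g. 'north of the landmark'."""
    rows, cols = np.indices(grid_shape)
    lr, lc = landmark
    if relation == "north":          # cells above the landmark score higher
        score = np.clip(lr - rows, 0, None).astype(float)
    elif relation == "east":         # cells to the right of the landmark
        score = np.clip(cols - lc, 0, None).astype(float)
    else:                            # unknown relation: uninformative
        score = np.ones(grid_shape)
    score += 1e-6                    # avoid dividing by zero at map edges
    return score / score.sum()

# "The car is north of the church", church at cell (2, 2) on a 5x5 map.
print(np.round(spatial_likelihood((5, 5), landmark=(2, 2), relation="north"), 2))
```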

Self-play for Data Efficient Language Acquisition

no code implementations 10 Oct 2020 Charles Lovering, Ellie Pavlick

When communicating, people behave consistently across conversational roles: People understand the words they say and are able to produce the words they hear.

Language Acquisition

Interpretability and Analysis in Neural NLP

no code implementations ACL 2020 Yonatan Belinkov, Sebastian Gehrmann, Ellie Pavlick

While deep learning has transformed the natural language processing (NLP) field and impacted the larger computational linguistics community, the rise of neural networks is stained by their opaque nature: it is challenging to interpret the inner workings of neural network models and to explicate their behavior.

Robot Object Retrieval with Contextual Natural Language Queries

1 code implementation 23 Jun 2020 Thao Nguyen, Nakul Gopalan, Roma Patel, Matt Corsaro, Ellie Pavlick, Stefanie Tellex

The model takes in a language command containing a verb, for example "Hand me something to cut," and RGB images of candidate objects and selects the object that best satisfies the task specified by the verb.
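
One way to picture the setup is a scorer that embeds the command and each candidate image into a shared space and returns the best-matching object; the architecture below is an invented toy, not the paper's model.

```python
import torch
import torch.nn as nn

class VerbObjectScorer(nn.Module):
    """Score candidate object images against a language command (toy)."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.word_emb = nn.EmbeddingBag(vocab_size, dim)   # command encoder
        self.img_enc = nn.Sequential(                      # tiny image encoder
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, dim))

    def forward(self, command_ids, images):
        c = self.word_emb(command_ids)        # (1, dim)
        v = self.img_enc(images)              # (num_candidates, dim)
        return v @ c.squeeze(0)               # one score per candidate

model = VerbObjectScorer()
command = torch.randint(0, 1000, (1, 6))      # e.g. "hand me something to cut"
candidates = torch.randn(4, 3, 64, 64)        # RGB crops of four objects
print("selected candidate:", model(command, candidates).argmax().item())
```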

Does Data Augmentation Improve Generalization in NLP?

no code implementations 30 Apr 2020 Rohan Jha, Charles Lovering, Ellie Pavlick

Neural models often exploit superficial features to achieve good performance, rather than deriving more general features.

Data Augmentation, Fairness +1

What Happens To BERT Embeddings During Fine-tuning?

no code implementations EMNLP (BlackboxNLP) 2020 Amil Merchant, Elahe Rahimtoroghi, Ellie Pavlick, Ian Tenney

While there has been much recent work studying how linguistic information is encoded in pre-trained sentence representations, comparatively little is understood about how these models change when adapted to solve downstream tasks.

Dependency Parsing

How well do NLI models capture verb veridicality?

no code implementations IJCNLP 2019 Alexis Ross, Ellie Pavlick

In natural language inference (NLI), contexts are considered veridical if they allow us to infer that their underlying propositions make true claims about the real world.

Natural Language Inference
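
The distinction is easy to see in NLI format; the sentences below are invented illustrations of veridical, non-veridical, and antiveridical contexts:

```python
# "know" licenses the inference to its complement; "hope" does not;
# "pretend" suggests the complement is false.
pairs = [
    ("She knows that it rained.",     "It rained.", "entailment"),
    ("She hopes that it rained.",     "It rained.", "neutral"),
    ("She pretended that it rained.", "It rained.", "contradiction"),
]
for premise, hypothesis, label in pairs:
    print(f"{premise!r} -> {hypothesis!r}: {label}")
```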

Internal-Consistency Constraints for Emergent Communication

no code implementations 25 Sep 2019 Charles Lovering, Ellie Pavlick

When communicating, humans rely on internally-consistent language representations.

Using Grounded Word Representations to Study Theories of Lexical Concepts

no code implementations WS 2019 Dylan Ebert, Ellie Pavlick

The fields of cognitive science and philosophy have proposed many different theories for how humans represent “concepts”.

Planning with State Abstractions for Non-Markovian Task Specifications

1 code implementation 28 May 2019 Yoonseon Oh, Roma Patel, Thao Nguyen, Baichuan Huang, Ellie Pavlick, Stefanie Tellex

Oftentimes, we specify tasks for a robot using temporal language that can also span different levels of abstraction.
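
For instance, a non-Markovian specification like "eventually reach the kitchen, and never enter the bathroom" depends on the whole trajectory, not just the current state. Here is a toy check of such a task against a state trace (the formula and checker are invented for illustration; the paper grounds language to temporal-logic specifications for planning):

```python
def satisfies(trace):
    """Check 'eventually kitchen, always not bathroom' over a state trace."""
    return any(s == "kitchen" for s in trace) and all(s != "bathroom" for s in trace)

print(satisfies(["hall", "lab", "kitchen"]))        # True
print(satisfies(["hall", "bathroom", "kitchen"]))   # False
```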

Looking for ELMo's friends: Sentence-Level Pretraining Beyond Language Modeling

no code implementations ICLR 2019 Samuel R. Bowman, Ellie Pavlick, Edouard Grave, Benjamin Van Durme, Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen

Work on the problem of contextualized word representation—the development of reusable neural network components for sentence understanding—has recently seen a surge of progress centered on the unsupervised pretraining task of language modeling with methods like ELMo (Peters et al., 2018).

Language Modelling

Probing What Different NLP Tasks Teach Machines about Function Word Comprehension

no code implementations SEMEVAL 2019 Najoung Kim, Roma Patel, Adam Poliak, Alex Wang, Patrick Xia, R. Thomas McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, Ellie Pavlick

Our results show that pretraining on language modeling performs the best on average across our probing tasks, supporting its widespread use for pretraining state-of-the-art NLP models, while CCG supertagging and NLI pretraining perform comparably.

CCG Supertagging, Language Modelling +1

Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference

5 code implementations ACL 2019 R. Thomas McCoy, Ellie Pavlick, Tal Linzen

We find that models trained on MNLI, including BERT, a state-of-the-art model, perform very poorly on HANS, suggesting that they have indeed adopted these heuristics.

Natural Language Inference
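
A toy HANS-style diagnostic pair targeting the lexical-overlap heuristic: every hypothesis word appears in the premise, yet entailment fails. The sentences below are illustrative, not drawn from the released dataset.

```python
examples = [
    {"premise": "The doctor paid the actor.",
     "hypothesis": "The actor paid the doctor.",
     "label": "non-entailment"},   # the overlap heuristic wrongly predicts entailment
    {"premise": "The doctor paid the actor.",
     "hypothesis": "The doctor paid the actor.",
     "label": "entailment"},
]
for ex in examples:
    print(ex)
```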

Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation

no code implementations EMNLP (ACL) 2018 Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, Benjamin Van Durme

We present a large-scale collection of diverse natural language inference (NLI) datasets that help provide insight into how well a sentence representation captures distinct types of reasoning.

Natural Language Inference

Identifying 1950s American Jazz Musicians: Fine-Grained IsA Extraction via Modifier Composition

no code implementations ACL 2017 Ellie Pavlick, Marius Paşca

We present a method for populating fine-grained classes (e.g., “1950s American jazz musicians”) with instances (e.g., Charles Mingus).
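
A toy reading of modifier composition: treat each modifier as a property filter over instances of the head class ("jazz musicians"). The data and filters below are invented for illustration.

```python
instances = {
    "Charles Mingus":   {"nationality": "American", "active": range(1943, 1979)},
    "Django Reinhardt": {"nationality": "French",   "active": range(1928, 1953)},
}

def satisfies(props, modifier):
    if modifier == "American":
        return props["nationality"] == "American"
    if modifier == "1950s":
        return any(year in props["active"] for year in range(1950, 1960))
    return False

print([name for name, props in instances.items()
       if all(satisfies(props, m) for m in ["1950s", "American"])])
# -> ['Charles Mingus']
```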

Optimizing Statistical Machine Translation for Text Simplification

1 code implementation TACL 2016 Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, Chris Callison-Burch

Most recent sentence simplification systems use basic machine translation models to learn lexical and syntactic paraphrases from a manually simplified parallel corpus.

Machine Translation, Text Simplification +1
