Search Results for author: Roger Levy

Found 56 papers, 21 papers with code

Predicting scalar diversity with context-driven uncertainty over alternatives

no code implementations CMCL (ACL) 2022 Jennifer Hu, Roger Levy, Sebastian Schuster

Here, we test the hypothesis that scalar inference (SI) rates depend on the listener’s confidence in the underlying scale, which we operationalize as uncertainty over the distribution of possible alternatives conditioned on the context.

Sentence, Sentence Embedding +2
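
The operationalization described above amounts to computing the entropy of a context-conditioned distribution over scale-mates. A minimal sketch, assuming a hypothetical distribution over alternatives (the scale words and probabilities below are invented for illustration, not taken from the paper):

    import math

    def entropy(probs):
        """Shannon entropy (in bits) of a probability distribution."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Hypothetical distribution over scalar alternatives to "some",
    # e.g. as estimated by a language model conditioned on the context.
    alternatives = {"all": 0.45, "most": 0.25, "many": 0.20, "few": 0.10}

    uncertainty = entropy(alternatives.values())
    print(f"uncertainty over alternatives: {uncertainty:.2f} bits")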

Flexible Generation from Fragmentary Linguistic Input

1 code implementation ACL 2022 Peng Qian, Roger Levy

We hypothesize that human performance is better characterized by flexible inference through composition of basic computational motifs available to the human language user.

LINC: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers

1 code implementation 23 Oct 2023 Theo X. Olausson, Alex Gu, Benjamin Lipkin, Cedegao E. Zhang, Armando Solar-Lezama, Joshua B. Tenenbaum, Roger Levy

Logical reasoning, i.e., deductively inferring the truth value of a conclusion from a set of premises, is an important task for artificial intelligence with wide potential impacts on science, mathematics, and society.

Logical Reasoning
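
As the title indicates, LINC pairs an LLM, which translates premises and conclusion into first-order logic, with a symbolic theorem prover that checks entailment. A minimal sketch of the prover half only, using NLTK's resolution prover with invented toy formulas (the paper's actual prover backend and prompts may differ):

    from nltk.sem import Expression
    from nltk.inference import ResolutionProver

    read_expr = Expression.fromstring

    # Toy first-order logic formulas standing in for LLM-translated premises.
    premises = [
        read_expr("all x.(researcher(x) -> writes_papers(x))"),
        read_expr("researcher(levy)"),
    ]
    conclusion = read_expr("writes_papers(levy)")

    # The prover returns True iff the conclusion follows from the premises.
    print(ResolutionProver().prove(conclusion, premises))  # True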

Probing self-supervised speech models for phonetic and phonemic information: a case study in aspiration

no code implementations 9 Jun 2023 Kinan Martin, Jon Gauthier, Canaan Breiss, Roger Levy

Textless self-supervised speech models have grown in capabilities in recent years, but the nature of the linguistic information they encode has not yet been thoroughly examined.

The neural dynamics of auditory word recognition and integration

no code implementations 22 May 2023 Jon Gauthier, Roger Levy

We fit this model to explain scalp EEG signals recorded as subjects passively listened to a fictional story, revealing both the dynamics of the online auditory word recognition process and the neural correlates of the recognition and integration of words.

EEG

Prompting is not a substitute for probability measurements in large language models

1 code implementation 22 May 2023 Jennifer Hu, Roger Levy

Prompting is now a dominant method for evaluating the linguistic knowledge of large language models (LLMs).
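The contrast drawn here is between metalinguistic prompting (asking the model a question about a sentence) and reading off the model's probabilities directly. A minimal sketch of the direct-measurement side using HuggingFace Transformers; the model name and minimal pair are placeholders, not the paper's materials:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # placeholder model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    def sentence_logprob(sentence: str) -> float:
        """Total log probability of a sentence under the model."""
        ids = tokenizer(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            # loss is the mean negative log-likelihood over predicted tokens
            loss = model(ids, labels=ids).loss
        return -loss.item() * (ids.size(1) - 1)

    # Compare a minimal pair by probability rather than by prompting.
    print(sentence_logprob("The keys to the cabinet are on the table."))
    print(sentence_logprob("The keys to the cabinet is on the table."))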

Expectations over Unspoken Alternatives Predict Pragmatic Inferences

1 code implementation 7 Apr 2023 Jennifer Hu, Roger Levy, Judith Degen, Sebastian Schuster

Here, we test a shared mechanism explaining SI rates within and across scales: context-driven expectations about the unspoken alternatives.

Language model acceptability judgements are not always robust to context

no code implementations 18 Dec 2022 Koustuv Sinha, Jon Gauthier, Aaron Mueller, Kanishka Misra, Keren Fuentes, Roger Levy, Adina Williams

In this paper, we investigate the stability of language models' performance on targeted syntactic evaluations as we vary properties of the input context: the length of the context, the types of syntactic phenomena it contains, and whether or not there are violations of grammaticality.

In-Context Learning, Language Modelling +1

On the Effect of Anticipation on Reading Times

1 code implementation 25 Nov 2022 Tiago Pimentel, Clara Meister, Ethan G. Wilcox, Roger Levy, Ryan Cotterell

We assess the effect of anticipation on reading by comparing how well surprisal and contextual entropy predict reading times on four naturalistic reading datasets: two self-paced and two eye-tracking.
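Both predictors are read off a language model's next-word distribution: surprisal is the negative log probability of the word actually encountered, and contextual entropy is the expected surprisal over that distribution. A small worked sketch with an invented toy distribution (no model or dataset from the paper is reproduced here):

    import math

    def surprisal(p_word: float) -> float:
        """Surprisal of the word actually read: -log2 p(w_t | context)."""
        return -math.log2(p_word)

    def contextual_entropy(next_word_dist: dict) -> float:
        """Expected surprisal over the next-word distribution."""
        return sum(-p * math.log2(p) for p in next_word_dist.values() if p > 0)

    # Invented next-word distribution after "The children went outside to ..."
    dist = {"play": 0.6, "eat": 0.2, "sleep": 0.15, "vote": 0.05}

    print(f"surprisal of 'play': {surprisal(dist['play']):.2f} bits")
    print(f"contextual entropy:  {contextual_entropy(dist):.2f} bits")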

Towards Human-Agent Communication via the Information Bottleneck Principle

no code implementations 30 Jun 2022 Mycal Tucker, Julie Shah, Roger Levy, Noga Zaslavsky

Emergent communication research often focuses on optimizing task-specific utility as a driver for communication.

Informativeness

Assessing Group-level Gender Bias in Professional Evaluations: The Case of Medical Student End-of-Shift Feedback

no code implementations NAACL (GeBNLP) 2022 Emmy Liu, Michael Henry Tessler, Nicole Dubosh, Katherine Mosher Hiller, Roger Levy

Although approximately 50% of medical school graduates today are women, female physicians tend to be underrepresented in senior positions, make less money than their male counterparts and receive fewer promotions.

Topic Models

Analyzing Wrap-Up Effects through an Information-Theoretic Lens

no code implementations ACL 2022 Clara Meister, Tiago Pimentel, Thomas Hikaru Clark, Ryan Cotterell, Roger Levy

Numerous analyses of reading time (RT) data have been implemented, all in an effort to better understand the cognitive processes driving reading comprehension.

Reading Comprehension, Sentence

Revisiting the Uniform Information Density Hypothesis

no code implementations EMNLP 2021 Clara Meister, Tiago Pimentel, Patrick Haller, Lena Jäger, Ryan Cotterell, Roger Levy

The uniform information density (UID) hypothesis posits a preference among language users for utterances structured such that information is distributed uniformly across a signal.

Linguistic Acceptability, Sentence
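
One way the UID hypothesis is commonly operationalized is through the dispersion of per-word surprisals within a sentence: the more evenly information is spread, the lower the variance. An illustrative sketch with made-up surprisal values (the paper evaluates several operationalizations; this is only one of them):

    from statistics import mean, variance

    # Made-up per-word surprisals (bits) for two variants of the same message.
    even  = [2.1, 2.3, 2.0, 2.2, 2.4]   # information spread uniformly
    peaky = [0.3, 0.4, 7.9, 0.5, 2.0]   # information concentrated on one word

    # Under a variance-based UID measure, the "even" variant is preferred.
    for name, s in [("even", even), ("peaky", peaky)]:
        print(name, "mean:", round(mean(s), 2), "variance:", round(variance(s), 2))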

Controlled Evaluation of Grammatical Knowledge in Mandarin Chinese Language Models

1 code implementation EMNLP 2021 Yiwen Wang, Jennifer Hu, Roger Levy, Peng Qian

We find suggestive evidence that structural supervision helps with representing syntactic state across intervening content and improves performance in low-data settings, suggesting that the benefits of hierarchical inductive biases in acquiring dependency relationships may extend beyond English.

Inductive Bias

Scalable pragmatic communication via self-supervision

no code implementations 12 Aug 2021 Jennifer Hu, Roger Levy, Noga Zaslavsky

Models of context-sensitive communication often use the Rational Speech Act framework (RSA; Frank & Goodman, 2012), which formulates listeners and speakers in a cooperative reasoning process.
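
In the RSA framework, a literal listener interprets utterances by their truth conditions, a pragmatic speaker soft-maximizes the chance of being understood, and a pragmatic listener reasons about that speaker. A compact numpy sketch over an invented reference-game lexicon (the utterances, referents, and rationality parameter are illustrative only):

    import numpy as np

    # Toy lexicon: rows = utterances, columns = referents; 1 = literally true.
    utterances = ["glasses", "hat"]
    L = np.array([[1.0, 1.0, 0.0],   # "glasses" is true of referents 0 and 1
                  [0.0, 1.0, 1.0]])  # "hat" is true of referents 1 and 2
    alpha = 1.0  # speaker rationality (invented value)

    def normalize(m):
        return m / m.sum(axis=1, keepdims=True)

    l0 = normalize(L)              # literal listener   P(referent | utterance)
    s1 = normalize(l0.T ** alpha)  # pragmatic speaker  P(utterance | referent)
    l1 = normalize(s1.T)           # pragmatic listener P(referent | utterance)

    for u, row in zip(utterances, l1.round(2)):
        print(u, row)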

A Targeted Assessment of Incremental Processing in Neural Language Models and Humans

no code implementations ACL 2021 Ethan Wilcox, Pranali Vani, Roger Levy

We present a targeted, scaled-up comparison of incremental processing in humans and neural language models by collecting by-word reaction time data for sixteen different syntactic test suites across a range of structural phenomena.

Language Modelling, Sentence

Structural Guidance for Transformer Language Models

1 code implementation ACL 2021 Peng Qian, Tahira Naseem, Roger Levy, Ramón Fernandez Astudillo

Here we study whether structural guidance leads to more human-like systematic linguistic generalization in Transformer language models without resorting to pre-training on very large amounts of data.

Language Modelling

What if This Modified That? Syntactic Interventions via Counterfactual Embeddings

1 code implementation 28 May 2021 Mycal Tucker, Peng Qian, Roger Levy

Neural language models exhibit impressive performance on a variety of tasks, but their internal reasoning may be difficult to understand.

counterfactual

Investigating Novel Verb Learning in BERT: Selectional Preference Classes and Alternation-Based Syntactic Generalization

1 code implementation EMNLP (BlackboxNLP) 2020 Tristan Thrush, Ethan Wilcox, Roger Levy

Previous studies investigating the syntactic abilities of deep learning models have not targeted the relationship between the strength of the grammatical generalization and the amount of evidence to which the model is exposed during training.

Few-Shot Learning

Cloze Distillation: Improving Neural Language Models with Human Next-Word Prediction

no code implementations CoNLL 2020 Tiwalayo Eisape, Noga Zaslavsky, Roger Levy

Contemporary autoregressive language models (LMs) trained purely on corpus data have been shown to capture numerous features of human incremental processing.

Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models

no code implementations EMNLP 2020 Ethan Wilcox, Peng Qian, Richard Futrell, Ryosuke Kohita, Roger Levy, Miguel Ballesteros

Humans can learn structural properties about a word from minimal experience, and deploy their learned syntactic representations uniformly in different grammatical contexts.

Few-Shot Learning, Sentence

Bridging Information-Seeking Human Gaze and Machine Reading Comprehension

no code implementations CoNLL 2020 Jonathan Malmaud, Roger Levy, Yevgeni Berzak

In this work, we analyze how human gaze during reading comprehension is conditioned on the given reading comprehension question, and whether this signal can be beneficial for machine reading comprehension.

Machine Reading Comprehension, Multiple-choice +1

On the Predictive Power of Neural Language Models for Human Real-Time Comprehension Behavior

1 code implementation 2 Jun 2020 Ethan Gotlieb Wilcox, Jon Gauthier, Jennifer Hu, Peng Qian, Roger Levy

Human reading behavior is tuned to the statistics of natural language: the time it takes human subjects to read a word can be predicted from estimates of the word's probability in context.

Open-Ended Question Answering

STARC: Structured Annotations for Reading Comprehension

1 code implementation ACL 2020 Yevgeni Berzak, Jonathan Malmaud, Roger Levy

We present STARC (Structured Annotations for Reading Comprehension), a new annotation framework for assessing reading comprehension with multiple choice questions.

Multiple-choice, Reading Comprehension

Linking artificial and human neural representations of language

1 code implementation IJCNLP 2019 Jon Gauthier, Roger Levy

Through further task ablations and representational analyses, we find that tasks which produce syntax-light representations yield significant improvements in brain decoding performance.

Brain Decoding, Natural Language Understanding +1

Representation of Constituents in Neural Language Models: Coordination Phrase as a Case Study

1 code implementation IJCNLP 2019 Aixiu An, Peng Qian, Ethan Wilcox, Roger Levy

We assess whether different neural language models trained on English and French represent phrase-level number and gender features, and use those features to drive downstream expectations.

Hierarchical Representation in Neural Language Models: Suppression and Recovery of Expectations

no code implementations WS 2019 Ethan Wilcox, Roger Levy, Richard Futrell

Deep learning sequence models have led to a marked increase in performance for a range of Natural Language Processing tasks, but it remains an open question whether they are able to induce proper hierarchical generalizations for representing natural language from linear input alone.

Open-Ended Question Answering

Language Learning and Processing in People and Machines

no code implementations NAACL 2019 Aida Nematzadeh, Richard Futrell, Roger Levy

We explain the current computational models of language acquisition, their limitations, and how the insights from these models can be incorporated into NLP applications.

Language Acquisition, Machine Translation +2

What Syntactic Structures block Dependencies in RNN Language Models?

no code implementations 24 May 2019 Ethan Wilcox, Roger Levy, Richard Futrell

Here, we provide new evidence that RNN language models are sensitive to hierarchical syntactic structure by investigating the filler-gap dependency and constraints on it, known as syntactic islands.

Language Modelling

Availability-Based Production Predicts Speakers' Real-time Choices of Mandarin Classifiers

no code implementations 17 May 2019 Meilin Zhan, Roger Levy

When the upcoming noun is less predictable, the use of a more specific classifier would reduce surprisal at the noun and thus potentially facilitate comprehension (predicted by Uniform Information Density; Levy & Jaeger, 2007), but the use of that more specific classifier may be dispreferred from a production standpoint if the general classifier is always accessible (predicted by Availability-Based Production; Bock, 1987; Ferreira & Dell, 2000).

Neural Language Models as Psycholinguistic Subjects: Representations of Syntactic State

2 code implementations NAACL 2019 Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, Roger Levy

We deploy the methods of controlled psycholinguistic experimentation to shed light on the extent to which the behavior of neural network language models reflects incremental representations of syntactic state.

Structural Supervision Improves Learning of Non-Local Grammatical Dependencies

no code implementations NAACL 2019 Ethan Wilcox, Peng Qian, Richard Futrell, Miguel Ballesteros, Roger Levy

State-of-the-art LSTM language models trained on large corpora learn sequential contingencies in impressive detail and have been shown to acquire a number of non-local grammatical dependencies with some success.

Language Modelling

What do RNN Language Models Learn about Filler-Gap Dependencies?

no code implementations WS 2018 Ethan Wilcox, Roger Levy, Takashi Morita, Richard Futrell

RNN language models have achieved state-of-the-art perplexity results and have proven useful in a suite of NLP tasks, but it is as yet unclear what syntactic generalizations they learn.

Language Modelling, Machine Translation

Comparing Models of Associative Meaning: An Empirical Investigation of Reference in Simple Language Games

1 code implementation CoNLL 2018 Judy Hanwen Shen, Matthias Hofer, Bjarke Felbo, Roger Levy

These results shed light on the nature of the lexical resources that speakers and listeners can bring to bear in achieving reference through associative meaning alone.

RNNs as psycholinguistic subjects: Syntactic state and grammatical dependency

1 code implementation 5 Sep 2018 Richard Futrell, Ethan Wilcox, Takashi Morita, Roger Levy

Recurrent neural networks (RNNs) are the state of the art in sequence modeling for natural language.

Language Modelling

What do RNN Language Models Learn about Filler-Gap Dependencies?

no code implementations 31 Aug 2018 Ethan Wilcox, Roger Levy, Takashi Morita, Richard Futrell

RNN language models have achieved state-of-the-art perplexity results and have proven useful in a suite of NLP tasks, but it is as yet unclear what syntactic generalizations they learn.

Word learning and the acquisition of syntactic-semantic overhypotheses

no code implementations 14 May 2018 Jon Gauthier, Roger Levy, Joshua B. Tenenbaum

Children learning their first language face multiple problems of induction: how to learn the meanings of words, and how to build meaningful phrases from those words according to syntactic rules.

Language Acquisition

Assessing Language Proficiency from Eye Movements in Reading

no code implementations NAACL 2018 Yevgeni Berzak, Boris Katz, Roger Levy

We present a novel approach for determining learners' second language proficiency which utilizes behavioral traces of eye movements during reading.

A Statistical Comparison of Some Theories of NP Word Order

1 code implementation 8 Sep 2017 Richard Futrell, Roger Levy, Matthew Dryer

A frequent object of study in linguistic typology is the order of elements {demonstrative, adjective, numeral, noun} in the noun phrase.

regression

Noisy-context surprisal as a human sentence processing cost model

no code implementations EACL 2017 Richard Futrell, Roger Levy

We use the noisy-channel theory of human sentence comprehension to develop an incremental processing cost model that unifies and extends key features of expectation-based and memory-based models.

Sentence
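
The processing cost proposed here is a word's surprisal computed not against the literal context but against a noisy memory representation of it, i.e. marginalizing over the contexts the comprehender might actually be retaining. A toy sketch with an invented noise distribution over two candidate contexts:

    import math

    # Invented: the comprehender's noisy memory assigns probability to two
    # possible preceding contexts, each of which gives the next word a probability.
    p_context = {"the dog that the cat chased": 0.7,
                 "the dog that the cat":        0.3}
    p_word_given_context = {"the dog that the cat chased": 0.05,
                            "the dog that the cat":        0.20}

    # Noisy-context surprisal: -log of the word's probability marginalized
    # over the noisy representation of its context.
    p_word = sum(p_context[c] * p_word_given_context[c] for c in p_context)
    print(f"noisy-context surprisal: {-math.log2(p_word):.2f} bits")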

Data-driven learning of symbolic constraints for a log-linear model in a phonological setting

no code implementations COLING 2016 Gabriel Doyle, Roger Levy

We propose a non-parametric Bayesian model for learning and weighting symbolically-defined constraints to populate a log-linear model.

Machine Translation
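
In a log-linear (maximum-entropy) phonological grammar, each candidate form is scored by a weighted sum of constraint violations and the probability of a form is a softmax over those scores; the paper's contribution is a Bayesian procedure for inducing the constraints themselves. A minimal sketch of the scoring side only, with invented constraints and weights:

    import math

    # Invented constraint violation counts for two candidate surface forms.
    candidates = {
        "ta":  {"*COMPLEX_ONSET": 0, "MAX": 1},   # deletes a segment
        "tra": {"*COMPLEX_ONSET": 1, "MAX": 0},   # keeps the onset cluster
    }
    weights = {"*COMPLEX_ONSET": 2.0, "MAX": 1.0}  # invented constraint weights

    def harmony(violations):
        # Negative weighted sum of constraint violations.
        return -sum(weights[c] * v for c, v in violations.items())

    z = sum(math.exp(harmony(v)) for v in candidates.values())
    for form, v in candidates.items():
        print(form, round(math.exp(harmony(v)) / z, 3))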
