Search Results for author: Ethan Wilcox

Found 18 papers, 8 papers with code

[Call for Papers] The 2nd BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus

no code implementations9 Apr 2024 Leshem Choshen, Ryan Cotterell, Michael Y. Hu, Tal Linzen, Aaron Mueller, Candace Ross, Alex Warstadt, Ethan Wilcox, Adina Williams, Chengxu Zhuang

The big changes for this year's competition are as follows: First, we replace the loose track with a paper track, which allows (for example) non-model-based submissions, novel cognitively-inspired benchmarks, or analysis techniques.

Quantifying the redundancy between prosody and text

1 code implementation28 Nov 2023 Lukas Wolf, Tiago Pimentel, Evelina Fedorenko, Ryan Cotterell, Alex Warstadt, Ethan Wilcox, Tamar Regev

Using a large spoken corpus of English audiobooks, we extract prosodic features aligned to individual words and test how well they can be predicted from LLM embeddings, compared to non-contextual word embeddings.

Word Embeddings
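
The core measurement in the paper above is a predictability comparison, which can be pictured as a simple probing experiment. The snippet below is a minimal sketch, not the paper's actual pipeline: it fits a ridge probe to predict a word-level prosodic feature from contextual versus non-contextual embeddings, with `X_contextual`, `X_static`, and `y_prominence` as hypothetical stand-in arrays for the word-aligned audiobook features.

```python
# Minimal probing sketch, not the paper's pipeline: predict a word-level
# prosodic feature from contextual vs. non-contextual embeddings with a
# ridge probe. X_contextual, X_static, and y_prominence are hypothetical
# stand-ins for the word-aligned audiobook features used in the paper.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_words = 2000
X_contextual = rng.normal(size=(n_words, 768))  # e.g. LLM hidden state per word
X_static = rng.normal(size=(n_words, 300))      # e.g. GloVe vector per word
y_prominence = rng.normal(size=n_words)         # e.g. pitch prominence per word

def probe_r2(X, y):
    """Cross-validated R^2 of a ridge probe: how predictable is prosody from X?"""
    return cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2").mean()

print("contextual embeddings R^2:", probe_r2(X_contextual, y_prominence))
print("static embeddings R^2:    ", probe_r2(X_static, y_prominence))
```

The gap between the two scores gives a rough, probe-based proxy for the kind of prosody-text redundancy the paper quantifies.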

Controlled Text Generation with Natural Language Instructions

1 code implementation27 Apr 2023 Wangchunshu Zhou, Yuchen Eleanor Jiang, Ethan Wilcox, Ryan Cotterell, Mrinmaya Sachan

Large language models generate fluent texts and can follow natural language instructions to solve a wide range of tasks without task-specific training.

In-Context Learning · Language Modelling +1
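
As an illustration only of the instruction-following interface described in the abstract above (not the model or training method proposed in the paper), the sketch below prompts an off-the-shelf instruction-tuned model, assumed here to be `google/flan-t5-base` via the HuggingFace transformers pipeline, with a generation task plus a constraint.

```python
# Illustration only of the instruction-following interface, not the paper's
# proposed model or training method. Assumes an off-the-shelf instruction-tuned
# model (google/flan-t5-base) via the HuggingFace transformers pipeline.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")

instruction = (
    "Write a two-sentence product description for a travel mug. "
    "Constraint: do not use the word 'coffee' and keep a formal tone."
)
print(generator(instruction, max_new_tokens=80)[0]["generated_text"])
```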

Call for Papers -- The BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus

1 code implementation27 Jan 2023 Alex Warstadt, Leshem Choshen, Aaron Mueller, Adina Williams, Ethan Wilcox, Chengxu Zhuang

In partnership with CoNLL and CMCL, we provide a platform for approaches to pretraining with a limited-size corpus sourced from data inspired by the input to children.

Language Acquisition · Language Modelling +1

A Targeted Assessment of Incremental Processing in Neural Language Models and Humans

no code implementations ACL 2021 Ethan Wilcox, Pranali Vani, Roger Levy

We present a targeted, scaled-up comparison of incremental processing in humans and neural language models by collecting by-word reaction time data for sixteen different syntactic test suites across a range of structural phenomena.

Language Modelling · Sentence
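
The model-side quantity compared against human by-word reaction times in this line of work is surprisal. Below is a minimal sketch of per-token surprisal from a pretrained causal LM, assuming GPT-2 via HuggingFace transformers; the paper evaluates a range of model classes, and word-level surprisal is obtained by summing over a word's subword tokens.

```python
# Minimal sketch of per-token surprisal from a pretrained causal LM, assuming
# GPT-2 via HuggingFace transformers; the paper evaluates several model
# classes, and word surprisal is the sum over a word's subword tokens.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(sentence: str):
    """Return (token, surprisal in bits) for every token after the first."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # prediction for each next token
    next_ids = ids[0, 1:]
    nll = -log_probs[torch.arange(next_ids.size(0)), next_ids]
    tokens = tokenizer.convert_ids_to_tokens(next_ids.tolist())
    return list(zip(tokens, (nll / math.log(2)).tolist()))

for tok, s in token_surprisals("The dog that the cats chase runs fast."):
    print(f"{tok!r:>12}  {s:6.2f}")
```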

Investigating Novel Verb Learning in BERT: Selectional Preference Classes and Alternation-Based Syntactic Generalization

1 code implementation EMNLP (BlackboxNLP) 2020 Tristan Thrush, Ethan Wilcox, Roger Levy

Previous studies investigating the syntactic abilities of deep learning models have not targeted the relationship between the strength of the grammatical generalization and the amount of evidence to which the model is exposed during training.

Few-Shot Learning
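
One building block for this kind of study is scoring a verb in contrasting alternation frames with BERT's masked-LM head. The sketch below shows only that frame-scoring step with an attested verb, assuming `bert-base-uncased` via HuggingFace transformers; the paper's actual setup additionally introduces novel verbs and controls how many training exposures the model receives, which is not reproduced here.

```python
# Minimal sketch of the frame-scoring step only, assuming bert-base-uncased
# via HuggingFace transformers: log-probability of a verb at a masked slot in
# two alternation frames. The paper's setup additionally introduces novel
# verbs and controls the number of exposures, which is not shown here.
import torch
from transformers import BertForMaskedLM, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def masked_verb_logprob(frame: str, verb: str) -> float:
    """Log-probability of `verb` at the [MASK] position in `frame`."""
    ids = tokenizer(frame, return_tensors="pt").input_ids
    mask_pos = (ids[0] == tokenizer.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = model(ids).logits
    verb_id = tokenizer.convert_tokens_to_ids(verb)
    return torch.log_softmax(logits[0, mask_pos], dim=-1)[verb_id].item()

# Same verb in double-object vs. prepositional-dative frames.
print(masked_verb_logprob("the teacher [MASK] the student a book .", "gave"))
print(masked_verb_logprob("the teacher [MASK] a book to the student .", "gave"))
```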

Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models

no code implementations EMNLP 2020 Ethan Wilcox, Peng Qian, Richard Futrell, Ryosuke Kohita, Roger Levy, Miguel Ballesteros

Humans can learn structural properties about a word from minimal experience, and deploy their learned syntactic representations uniformly in different grammatical contexts.

Few-Shot Learning · Sentence

A Systematic Assessment of Syntactic Generalization in Neural Language Models

1 code implementation ACL 2020 Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, Roger P. Levy

While state-of-the-art neural network models continue to achieve lower perplexity scores on language modeling benchmarks, it remains unknown whether optimizing for broad-coverage predictive performance leads to human-like syntactic knowledge.

Language Modelling
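
Targeted syntactic evaluation of this kind scores a model not by perplexity but by whether its surprisals satisfy item-level inequalities over critical regions. A minimal sketch of the simplest two-condition criterion follows; the surprisal values are hypothetical placeholders, and the real test suites use suite-specific criteria and critical regions.

```python
# Minimal sketch of the success criterion behind targeted syntactic
# evaluation: an item passes when critical-region surprisal is lower in the
# expected condition than in the contrasting one. The numbers are hypothetical
# placeholders; real suites use suite-specific criteria and critical regions.
from statistics import mean

def suite_accuracy(items):
    """items: list of (surprisal_expected, surprisal_contrast) pairs."""
    return mean(1.0 if s_exp < s_con else 0.0 for s_exp, s_con in items)

# e.g. agreement items: surprisal of the verb with matched vs. mismatched number
agreement_items = [(3.1, 7.4), (2.8, 6.9), (4.0, 3.7)]
print(f"suite accuracy: {suite_accuracy(agreement_items):.2f}")  # 0.67, 2 of 3 pass
```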

Representation of Constituents in Neural Language Models: Coordination Phrase as a Case Study

1 code implementation IJCNLP 2019 Aixiu An, Peng Qian, Ethan Wilcox, Roger Levy

We assess whether different neural language models trained on English and French represent phrase-level number and gender features, and use those features to drive downstream expectations.

Hierarchical Representation in Neural Language Models: Suppression and Recovery of Expectations

no code implementations WS 2019 Ethan Wilcox, Roger Levy, Richard Futrell

Deep learning sequence models have led to a marked increase in performance for a range of Natural Language Processing tasks, but it remains an open question whether they are able to induce proper hierarchical generalizations for representing natural language from linear input alone.

Open-Ended Question Answering

What Syntactic Structures block Dependencies in RNN Language Models?

no code implementations24 May 2019 Ethan Wilcox, Roger Levy, Richard Futrell

Here, we provide new evidence that RNN language models are sensitive to hierarchical syntactic structure by investigating the filler–gap dependency and constraints on it, known as syntactic islands.

Language Modelling
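
The filler-gap studies in this line of work typically use a 2x2 design crossing the presence of a wh-filler with the presence of a gap, summarized as a licensing interaction over surprisals at the gap site. The sketch below computes that difference-in-differences with hypothetical placeholder surprisals; in an island configuration the interaction should collapse toward zero if the model respects the constraint.

```python
# Minimal sketch of the 2x2 filler-gap design: surprisal is measured at (or
# just after) the gap site in four conditions crossing a wh-filler with a gap,
# and summarized as a licensing interaction. The surprisal values below are
# hypothetical placeholders standing in for model measurements.

def wh_licensing_interaction(s_nowh_gap, s_wh_gap, s_nowh_nogap, s_wh_nogap):
    """(S[-wh,+gap] - S[+wh,+gap]) - (S[-wh,-gap] - S[+wh,-gap])"""
    return (s_nowh_gap - s_wh_gap) - (s_nowh_nogap - s_wh_nogap)

# Ordinary embedded clause: the filler licenses the gap -> large positive value.
print(f"{wh_licensing_interaction(9.5, 4.2, 3.0, 6.1):.1f}")  # 8.4
# Island configuration: if the model respects the island, the effect collapses.
print(f"{wh_licensing_interaction(8.8, 8.5, 3.1, 3.4):.1f}")  # 0.6
```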

Neural Language Models as Psycholinguistic Subjects: Representations of Syntactic State

2 code implementations NAACL 2019 Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, Roger Levy

We deploy the methods of controlled psycholinguistic experimentation to shed light on the extent to which the behavior of neural network language models reflects incremental representations of syntactic state.

Structural Supervision Improves Learning of Non-Local Grammatical Dependencies

no code implementations NAACL 2019 Ethan Wilcox, Peng Qian, Richard Futrell, Miguel Ballesteros, Roger Levy

State-of-the-art LSTM language models trained on large corpora learn sequential contingencies in impressive detail and have been shown to acquire a number of non-local grammatical dependencies with some success.

Language Modelling

What do RNN Language Models Learn about Filler–Gap Dependencies?

no code implementations WS 2018 Ethan Wilcox, Roger Levy, Takashi Morita, Richard Futrell

RNN language models have achieved state-of-the-art perplexity results and have proven useful in a suite of NLP tasks, but it is as yet unclear what syntactic generalizations they learn.

Language Modelling · Machine Translation

RNNs as psycholinguistic subjects: Syntactic state and grammatical dependency

1 code implementation5 Sep 2018 Richard Futrell, Ethan Wilcox, Takashi Morita, Roger Levy

Recurrent neural networks (RNNs) are the state of the art in sequence modeling for natural language.

Language Modelling

What do RNN Language Models Learn about Filler-Gap Dependencies?

no code implementations31 Aug 2018 Ethan Wilcox, Roger Levy, Takashi Morita, Richard Futrell

RNN language models have achieved state-of-the-art perplexity results and have proven useful in a suite of NLP tasks, but it is as yet unclear what syntactic generalizations they learn.
