Search Results for author: Richard Futrell

Found 26 papers, 8 papers with code

Sensitivity as a Complexity Measure for Sequence Classification Tasks

1 code implementation · 21 Apr 2021 · Michael Hahn, Dan Jurafsky, Richard Futrell

We introduce a theoretical framework for understanding and predicting the complexity of sequence classification tasks, using a novel extension of the theory of Boolean function sensitivity.

Classification · General Classification · +1
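The classical notion this framework extends can be made concrete: the sensitivity of a Boolean function at an input is the number of single-bit flips that change its output, and average sensitivity summarizes this over all inputs. The sketch below illustrates only that classical quantity, not the paper's extension to sequence classification; all function names are invented for illustration.

```python
from itertools import product

def sensitivity_at(f, x):
    """Number of single-bit flips of input x that change f's output."""
    return sum(
        f(x[:i] + (1 - x[i],) + x[i + 1:]) != f(x)
        for i in range(len(x))
    )

def average_sensitivity(f, n):
    """Mean sensitivity of f over all 2^n Boolean inputs of length n."""
    inputs = list(product((0, 1), repeat=n))
    return sum(sensitivity_at(f, x) for x in inputs) / len(inputs)

# Parity is maximally sensitive: every bit flip changes the output.
parity = lambda x: sum(x) % 2
# OR is far less sensitive: once two bits are 1, no single flip matters.
or_fn = lambda x: int(any(x))
```

On length-4 inputs, parity has average sensitivity 4.0 while OR's is only 0.5, matching the intuition that tasks requiring attention to every position are harder than tasks with a single trigger.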

Deep Subjecthood: Higher-Order Grammatical Features in Multilingual BERT

1 code implementation · EACL 2021 · Isabel Papadimitriou, Ethan A. Chi, Richard Futrell, Kyle Mahowald

Further examining the characteristics that our classifiers rely on, we find that features such as passive voice, animacy and case strongly correlate with classification decisions, suggesting that mBERT does not encode subjecthood purely syntactically, but that subjecthood embedding is continuous and dependent on semantic and discourse factors, as is proposed in much of the functional linguistics literature.

Predicting cross-linguistic adjective order with information gain

no code implementations · 30 Dec 2020 · William Dyer, Richard Futrell, Zoey Liu, Gregory Scontras

Languages vary in their placement of multiple adjectives before, after, or surrounding the noun, but they typically exhibit strong intra-language tendencies on the relative order of those adjectives (e.g., the preference for 'big blue box' in English, 'grande boîte bleue' in French, and 'alsundūq al'azraq alkabīr' in Arabic).
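Information gain in this kind of setting can be illustrated with a toy calculation: how much the entropy of one word's identity drops once an adjacent word is known. This is a generic sketch of the information-theoretic quantity, not the paper's corpus-based measure; the data and function names are made up.

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of a distribution given as raw counts."""
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total)
                for c in counts.values() if c)

def information_gain(pairs):
    """Entropy of the second word minus its conditional entropy given
    the first word: how much knowing word A narrows down word B."""
    second = Counter(b for _, b in pairs)
    by_first = {}
    for a, b in pairs:
        by_first.setdefault(a, Counter())[b] += 1
    total = len(pairs)
    h_cond = sum(sum(c.values()) / total * entropy(c)
                 for c in by_first.values())
    return entropy(second) - h_cond
```

If the first word fully determines the second, the gain equals the second word's entropy; if the two are independent, the gain is zero.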

Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models

no code implementations · EMNLP 2020 · Ethan Wilcox, Peng Qian, Richard Futrell, Ryosuke Kohita, Roger Levy, Miguel Ballesteros

Humans can learn structural properties about a word from minimal experience, and deploy their learned syntactic representations uniformly in different grammatical contexts.

Few-Shot Learning

What determines the order of adjectives in English? Comparing efficiency-based theories using dependency treebanks

no code implementations · ACL 2020 · Richard Futrell, William Dyer, Greg Scontras

The four theories we test are subjectivity (Scontras et al., 2017), information locality (Futrell, 2019), integration cost (Dyer, 2017), and information gain, which we introduce.

An information-theoretic account of semantic interference in word production

1 code implementation · 22 Jun 2020 · Richard Futrell

I present a computational-level model of semantic interference effects in word production.

Neurons and Cognition

Hierarchical Representation in Neural Language Models: Suppression and Recovery of Expectations

no code implementations · WS 2019 · Ethan Wilcox, Roger Levy, Richard Futrell

Deep learning sequence models have led to a marked increase in performance for a range of Natural Language Processing tasks, but it remains an open question whether they are able to induce proper hierarchical generalizations for representing natural language from linear input alone.

Language Learning and Processing in People and Machines

no code implementations · NAACL 2019 · Aida Nematzadeh, Richard Futrell, Roger Levy

We explain the current computational models of language acquisition, their limitations, and how the insights from these models can be incorporated into NLP applications.

Language Acquisition · Machine Translation · +1

What Syntactic Structures block Dependencies in RNN Language Models?

no code implementations · 24 May 2019 · Ethan Wilcox, Roger Levy, Richard Futrell

Here, we provide new evidence that RNN language models are sensitive to hierarchical syntactic structure by investigating the filler-gap dependency and constraints on it, known as syntactic islands.

Language Modelling

Neural Language Models as Psycholinguistic Subjects: Representations of Syntactic State

1 code implementation · NAACL 2019 · Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, Roger Levy

We deploy the methods of controlled psycholinguistic experimentation to shed light on the extent to which the behavior of neural network language models reflects incremental representations of syntactic state.

Structural Supervision Improves Learning of Non-Local Grammatical Dependencies

no code implementations · NAACL 2019 · Ethan Wilcox, Peng Qian, Richard Futrell, Miguel Ballesteros, Roger Levy

State-of-the-art LSTM language models trained on large corpora learn sequential contingencies in impressive detail and have been shown to acquire a number of non-local grammatical dependencies with some success.

Hierarchical Structure · Language Modelling

Do RNNs learn human-like abstract word order preferences?

1 code implementation · WS 2019 · Richard Futrell, Roger P. Levy

We collect human acceptability ratings for our stimuli, in the first acceptability judgment experiment directly manipulating the predictors of syntactic alternations.

Language Modelling

What do RNN Language Models Learn about Filler-Gap Dependencies?

no code implementations · WS 2018 · Ethan Wilcox, Roger Levy, Takashi Morita, Richard Futrell

RNN language models have achieved state-of-the-art perplexity results and have proven useful in a suite of NLP tasks, but it is as yet unclear what syntactic generalizations they learn.

Language Modelling · Machine Translation

RNNs as psycholinguistic subjects: Syntactic state and grammatical dependency

1 code implementation · 5 Sep 2018 · Richard Futrell, Ethan Wilcox, Takashi Morita, Roger Levy

Recurrent neural networks (RNNs) are the state of the art in sequence modeling for natural language.

Language Modelling

What do RNN Language Models Learn about Filler-Gap Dependencies?

no code implementations · 31 Aug 2018 · Ethan Wilcox, Roger Levy, Takashi Morita, Richard Futrell

RNN language models have achieved state-of-the-art perplexity results and have proven useful in a suite of NLP tasks, but it is as yet unclear what syntactic generalizations they learn.

A Statistical Comparison of Some Theories of NP Word Order

1 code implementation · 8 Sep 2017 · Richard Futrell, Roger Levy, Matthew Dryer

A frequent object of study in linguistic typology is the order of elements {demonstrative, adjective, numeral, noun} in the noun phrase.

The Natural Stories Corpus

1 code implementation · LREC 2018 · Richard Futrell, Edward Gibson, Hal Tily, Idan Blank, Anastasia Vishnevetsky, Steven T. Piantadosi, Evelina Fedorenko

It is now a common practice to compare models of human language processing by predicting participant reactions (such as reading times) to corpora consisting of rich naturalistic linguistic materials.

Noisy-context surprisal as a human sentence processing cost model

no code implementations · EACL 2017 · Richard Futrell, Roger Levy

We use the noisy-channel theory of human sentence comprehension to develop an incremental processing cost model that unifies and extends key features of expectation-based and memory-based models.
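The expectation-based ingredient of such models is surprisal, -log2 P(word | context). The paper's contribution is to compute this under a noisy representation of the context; the toy bigram sketch below shows only plain (noiseless) surprisal, and the class name and corpus are invented for illustration.

```python
import math
from collections import Counter, defaultdict

class BigramSurprisal:
    """Surprisal -log2 P(word | previous word) from bigram counts,
    with add-one smoothing: a toy stand-in for the language models
    used in expectation-based processing theories."""

    def __init__(self, corpus):
        self.vocab = set(corpus)
        self.bigrams = defaultdict(Counter)
        for prev, word in zip(corpus, corpus[1:]):
            self.bigrams[prev][word] += 1

    def surprisal(self, prev, word):
        counts = self.bigrams[prev]
        # Add-one smoothing so unseen continuations get finite surprisal.
        p = (counts[word] + 1) / (sum(counts.values()) + len(self.vocab))
        return -math.log2(p)
```

Words that are highly predictable in context get low surprisal, and expectation-based theories predict they are read faster.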

A Generative Model of Phonotactics

no code implementations · TACL 2017 · Richard Futrell, Adam Albright, Peter Graff, Timothy J. O'Donnell

We present a probabilistic model of phonotactics, the set of well-formed phoneme sequences in a language.

Memory access during incremental sentence processing causes reading time latency

no code implementations · WS 2016 · Cory Shain, Marten Van Schijndel, Richard Futrell, Edward Gibson, William Schuler

Studies on the role of memory as a predictor of reading time latencies (1) differ in their predictions about when memory effects should occur in processing and (2) have had mixed results, with strong positive effects emerging from isolated constructed stimuli and weak or even negative effects emerging from naturally-occurring stimuli.

Response to Liu, Xu, and Liang (2015) and Ferrer-i-Cancho and Gómez-Rodríguez (2015) on Dependency Length Minimization

no code implementations · 1 Oct 2015 · Richard Futrell, Kyle Mahowald, Edward Gibson

We address recent criticisms (Liu et al., 2015; Ferrer-i-Cancho and Gómez-Rodríguez, 2015) of our work on empirical evidence of dependency length minimization across languages (Futrell et al., 2015).
