Search Results for author: Richard Futrell

Found 45 papers, 14 papers with code

When classifying grammatical role, BERT doesn’t care about word order... except when it matters

no code implementations ACL 2022 Isabel Papadimitriou, Richard Futrell, Kyle Mahowald

Because meaning can often be inferred from lexical semantics alone, word order is often a redundant cue in natural language.

An Information-Theoretic Characterization of Morphological Fusion

1 code implementation EMNLP 2021 Neil Rathi, Michael Hahn, Richard Futrell

Linguistic typology generally divides synthetic languages into groups based on their morphological fusion.

Simple induction of (deterministic) probabilistic finite-state automata for phonotactics by stochastic gradient descent

1 code implementation ACL (SIGMORPHON) 2021 Huteng Dai, Richard Futrell

We introduce a simple and highly general phonotactic learner which induces a probabilistic finite-state automaton from word-form data.
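
As a rough illustration of the general idea (not the authors' model or code), the sketch below fits a deterministic probabilistic finite-state automaton with bigram structure — the state is simply the previously emitted symbol — to toy word forms by stochastic gradient descent on negative log-likelihood; the toy data and model size are assumptions.

```python
# Minimal sketch (not the paper's learner): a deterministic probabilistic
# finite-state automaton whose state is the previous symbol, with emission
# probabilities fit to word-form data by SGD on negative log-likelihood.
import torch

words = ["kat", "tak", "takat", "kata"]          # toy word-form data (assumption)
symbols = ["<s>", "</s>"] + sorted({c for w in words for c in w})
idx = {s: i for i, s in enumerate(symbols)}
V = len(symbols)

# One row of logits per state; the state after emitting symbol s is s itself,
# so transitions are deterministic and only emission probabilities are learned.
logits = torch.zeros(V, V, requires_grad=True)
opt = torch.optim.SGD([logits], lr=0.5)

for epoch in range(200):
    opt.zero_grad()
    nll = torch.tensor(0.0)
    for w in words:
        seq = ["<s>"] + list(w) + ["</s>"]
        for prev, cur in zip(seq, seq[1:]):
            log_probs = torch.log_softmax(logits[idx[prev]], dim=0)
            nll = nll - log_probs[idx[cur]]
    nll.backward()
    opt.step()

print("final NLL:", nll.item())
```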

Investigating Information-Theoretic Properties of the Typology of Spatial Demonstratives

no code implementations NAACL (SIGTYP) 2022 Sihan Chen, Richard Futrell, Kyle Mahowald

Using data from Nintemann et al. (2020), we explore the variability in complexity and informativity across spatial demonstrative systems using spatial deictic lexicons from 223 languages.

Estimating word co-occurrence probabilities from pretrained static embeddings using a log-bilinear model

no code implementations CMCL (ACL) 2022 Richard Futrell

We investigate how to use pretrained static word embeddings to deliver improved estimates of bilexical co-occurrence probabilities: conditional probabilities of one word given a single other word in a specific relationship.

Tasks: Word Embeddings
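
A minimal sketch of one way such a model could look (my own illustration, with toy data; the paper's parameterization and training setup may differ): score p(target | context) as a softmax over exp(e_ctx^T W e_tgt + b_tgt), keeping the pretrained static embeddings frozen and training only W and b on observed co-occurrence pairs.

```python
# Minimal log-bilinear sketch (assumptions throughout): frozen static
# embeddings E, trainable bilinear map W and target bias b, fit by
# cross-entropy on (context, target) co-occurrence pairs.
import torch

V, d = 1000, 50                      # toy vocabulary size and embedding dim
E = torch.randn(V, d)                # stand-in for pretrained static embeddings
W = torch.zeros(d, d, requires_grad=True)
b = torch.zeros(V, requires_grad=True)
opt = torch.optim.Adam([W, b], lr=0.01)

pairs = torch.randint(0, V, (256, 2))   # toy (context, target) pairs

for step in range(100):
    opt.zero_grad()
    ctx, tgt = pairs[:, 0], pairs[:, 1]
    scores = E[ctx] @ W @ E.T + b        # (batch, V) logits over target words
    loss = torch.nn.functional.cross_entropy(scores, tgt)
    loss.backward()
    opt.step()
```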

Assessing Corpus Evidence for Formal and Psycholinguistic Constraints on Nonprojectivity

no code implementations CL (ACL) 2022 Himanshu Yadav, Samar Husain, Richard Futrell

In Experiment 1, we compare the distribution of formal properties of crossing dependencies, such as gap degree, between real trees and baseline trees matched for rate of crossing dependencies and various other properties.
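
For reference, one of the formal properties at issue — the number of crossing dependency pairs in a tree — can be counted directly from head indices; the function and toy example below are my own illustration, not code from the paper.

```python
# Minimal sketch: count crossing dependency pairs in a tree, given for each
# word (1-indexed) the position of its head (0 = root).
def crossing_pairs(heads):
    edges = [(min(i, h), max(i, h)) for i, h in enumerate(heads, start=1) if h != 0]
    crossings = 0
    for a, (l1, r1) in enumerate(edges):
        for l2, r2 in edges[a + 1:]:
            # two arcs cross iff their endpoints strictly interleave
            if l1 < l2 < r1 < r2 or l2 < l1 < r2 < r1:
                crossings += 1
    return crossings

# toy non-projective tree: word 1 attaches to 3 and word 2 attaches to 4
print(crossing_pairs([3, 4, 0, 3]))   # 1 crossing pair
```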

What Determines the Order of Verbal Dependents in Hindi? Effects of Efficiency in Comprehension and Production

no code implementations EMNLP (CMCL) 2020 Kartik Sharma, Richard Futrell, Samar Husain

In this work, we investigate whether the order and relative distance of preverbal dependents in Hindi, an SOV language, is affected by factors motivated by efficiency considerations during comprehension/production.

Mission: Impossible Language Models

1 code implementation 12 Jan 2024 Julie Kallini, Isabel Papadimitriou, Richard Futrell, Kyle Mahowald, Christopher Potts

Chomsky and others have very directly claimed that large language models (LLMs) are equally capable of learning languages that are possible and impossible for humans to learn.

Exploring the Sensitivity of LLMs' Decision-Making Capabilities: Insights from Prompt Variation and Hyperparameters

1 code implementation 29 Dec 2023 Manikanta Loya, Divya Anand Sinha, Richard Futrell

However, these studies have not always properly accounted for the sensitivity of LLMs' behavior to hyperparameters and variations in the prompt.

Tasks: Decision Making

A unified information-theoretic model of EEG signatures of human language processing

no code implementations 16 Dec 2022 Jiaxuan Li, Richard Futrell

We advance an information-theoretic model of human language processing in the brain, in which incoming linguistic input is processed at two levels, in terms of a heuristic interpretation and in terms of error correction.

Tasks: EEG, ERP

When classifying grammatical role, BERT doesn't care about word order... except when it matters

1 code implementation 11 Mar 2022 Isabel Papadimitriou, Richard Futrell, Kyle Mahowald

Because meaning can often be inferred from lexical semantics alone, word order is often a redundant cue in natural language.

Grammatical cues to subjecthood are redundant in a majority of simple clauses across languages

no code implementations 30 Jan 2022 Kyle Mahowald, Evgeniia Diachek, Edward Gibson, Evelina Fedorenko, Richard Futrell

The conclusion is that grammatical cues such as word order are necessary to convey subjecthood and objecthood in only a minority of naturally occurring transitive clauses; nevertheless, they (a) provide an important source of redundancy and (b) are crucial for conveying intended meanings that cannot be inferred from the words alone, including descriptions of human interactions, where roles are often reversible (e.g., Ray helped Lu / Lu helped Ray), and expressions of non-prototypical meanings (e.g., "The bone chewed the dog").

Tasks: Sentence, World Knowledge

Sensitivity as a Complexity Measure for Sequence Classification Tasks

1 code implementation 21 Apr 2021 Michael Hahn, Dan Jurafsky, Richard Futrell

We introduce a theoretical framework for understanding and predicting the complexity of sequence classification tasks, using a novel extension of the theory of Boolean function sensitivity.

Tasks: General Classification, Text Classification +1
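
For reference, the classical notion being extended here is the sensitivity of a Boolean function: the maximum, over inputs, of the number of single-bit flips that change the function's output. The sketch below implements only this textbook definition, not the paper's extension to sequence classification tasks.

```python
# Minimal sketch of Boolean function sensitivity (textbook definition).
from itertools import product

def sensitivity(f, n):
    best = 0
    for x in product([0, 1], repeat=n):
        flips = sum(
            f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:])
            for i in range(n)
        )
        best = max(best, flips)
    return best

parity = lambda x: sum(x) % 2               # every bit flip changes the output
majority = lambda x: int(sum(x) > len(x) / 2)
print(sensitivity(parity, 5), sensitivity(majority, 5))   # 5 and 3
```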

Deep Subjecthood: Higher-Order Grammatical Features in Multilingual BERT

1 code implementation EACL 2021 Isabel Papadimitriou, Ethan A. Chi, Richard Futrell, Kyle Mahowald

Further examining the characteristics that our classifiers rely on, we find that features such as passive voice, animacy and case strongly correlate with classification decisions, suggesting that mBERT does not encode subjecthood purely syntactically, but that subjecthood embedding is continuous and dependent on semantic and discourse factors, as is proposed in much of the functional linguistics literature.

Tasks: Sentence
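
A minimal sketch of the general probing setup the abstract suggests (the model name, pooling choice, and toy data here are my assumptions, not the paper's protocol): extract mBERT contextual vectors for target nouns and fit a linear classifier for grammatical role.

```python
# Minimal probing sketch (assumes Hugging Face transformers and scikit-learn;
# toy data): classify subject vs. object from mBERT contextual embeddings.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

# (sentence, index of target word, label); 1 = subject, 0 = object
data = [("The dog chased the cat", 1, 1), ("The dog chased the cat", 4, 0)]

feats, labels = [], []
for sent, word_i, y in data:
    enc = tok(sent.split(), is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    # average the subword vectors belonging to the target word
    sub = [j for j, w in enumerate(enc.word_ids()) if w == word_i]
    feats.append(hidden[sub].mean(dim=0).numpy())
    labels.append(y)

clf = LogisticRegression(max_iter=1000).fit(feats, labels)
```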

Predicting cross-linguistic adjective order with information gain

no code implementations Findings (ACL) 2021 William Dyer, Richard Futrell, Zoey Liu, Gregory Scontras

Languages vary in their placement of multiple adjectives before, after, or surrounding the noun, but they typically exhibit strong intra-language tendencies on the relative order of those adjectives (e.g., the preference for 'big blue box' in English, 'grande boîte bleue' in French, and 'alsundūq al'azraq alkabīr' in Arabic).

Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models

no code implementations EMNLP 2020 Ethan Wilcox, Peng Qian, Richard Futrell, Ryosuke Kohita, Roger Levy, Miguel Ballesteros

Humans can learn structural properties about a word from minimal experience, and deploy their learned syntactic representations uniformly in different grammatical contexts.

Tasks: Few-Shot Learning, Sentence

What determines the order of adjectives in English? Comparing efficiency-based theories using dependency treebanks

no code implementations ACL 2020 Richard Futrell, William Dyer, Greg Scontras

The four theories we test are subjectivity (Scontras et al., 2017), information locality (Futrell, 2019), integration cost (Dyer, 2017), and information gain, which we introduce.
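
On one common reading, information gain concerns how much an adjective reduces uncertainty about the noun it modifies. The sketch below is my own toy estimate of H(noun) - H(noun | adjective) from co-occurrence counts, not the estimator defined in the paper.

```python
# Minimal sketch: information gain of adjectives about nouns,
# H(noun) - H(noun | adjective), from raw co-occurrence counts (toy data).
from collections import Counter
from math import log2

pairs = [("big", "box"), ("blue", "box"), ("big", "dog"), ("old", "dog"),
         ("blue", "car"), ("old", "car")]          # toy (adjective, noun) data

def entropy(counts):
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

h_noun = entropy(Counter(n for _, n in pairs))

h_noun_given_adj = 0.0
adjs = Counter(a for a, _ in pairs)
for a, c_a in adjs.items():
    cond = Counter(n for adj, n in pairs if adj == a)
    h_noun_given_adj += c_a / len(pairs) * entropy(cond)

print("information gain:", h_noun - h_noun_given_adj)
```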

An information-theoretic account of semantic interference in word production

1 code implementation 22 Jun 2020 Richard Futrell

I present a computational-level model of semantic interference effects in word production.

Hierarchical Representation in Neural Language Models: Suppression and Recovery of Expectations

no code implementations WS 2019 Ethan Wilcox, Roger Levy, Richard Futrell

Deep learning sequence models have led to a marked increase in performance for a range of Natural Language Processing tasks, but it remains an open question whether they are able to induce proper hierarchical generalizations for representing natural language from linear input alone.

Tasks: Open-Ended Question Answering

Language Learning and Processing in People and Machines

no code implementations NAACL 2019 Aida Nematzadeh, Richard Futrell, Roger Levy

We explain the current computational models of language acquisition, their limitations, and how the insights from these models can be incorporated into NLP applications.

Tasks: Language Acquisition, Machine Translation +2

What Syntactic Structures block Dependencies in RNN Language Models?

no code implementations 24 May 2019 Ethan Wilcox, Roger Levy, Richard Futrell

Here, we provide new evidence that RNN language models are sensitive to hierarchical syntactic structure by investigating the filler-gap dependency and constraints on it, known as syntactic islands.

Tasks: Language Modelling

Neural Language Models as Psycholinguistic Subjects: Representations of Syntactic State

2 code implementations NAACL 2019 Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, Roger Levy

We deploy the methods of controlled psycholinguistic experimentation to shed light on the extent to which the behavior of neural network language models reflects incremental representations of syntactic state.
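
The standard quantity compared across such controlled stimuli is per-word surprisal, -log2 p(word | preceding context), read off from the language model. The sketch below computes it with an off-the-shelf GPT-2 model as the "subject"; the model choice and example sentence are assumptions, not the paper's materials.

```python
# Minimal sketch: per-token surprisal from a pretrained causal language model.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tok("The keys to the cabinet are on the table", return_tensors="pt").input_ids
with torch.no_grad():
    log_probs = torch.log_softmax(model(ids).logits, dim=-1)

for pos in range(1, ids.shape[1]):
    surprisal = -log_probs[0, pos - 1, ids[0, pos]].item() / math.log(2)
    print(tok.decode(ids[0, pos].item()), round(surprisal, 2))
```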

Structural Supervision Improves Learning of Non-Local Grammatical Dependencies

no code implementations NAACL 2019 Ethan Wilcox, Peng Qian, Richard Futrell, Miguel Ballesteros, Roger Levy

State-of-the-art LSTM language models trained on large corpora learn sequential contingencies in impressive detail and have been shown to acquire a number of non-local grammatical dependencies with some success.

Tasks: Language Modelling

Do RNNs learn human-like abstract word order preferences?

1 code implementation WS 2019 Richard Futrell, Roger P. Levy

We collect human acceptability ratings for our stimuli, in the first acceptability judgment experiment directly manipulating the predictors of syntactic alternations.

Tasks: Language Modelling, Sentence

What do RNN Language Models Learn about Filler-Gap Dependencies?

no code implementations WS 2018 Ethan Wilcox, Roger Levy, Takashi Morita, Richard Futrell

RNN language models have achieved state-of-the-art perplexity results and have proven useful in a suite of NLP tasks, but it is as yet unclear what syntactic generalizations they learn.

Tasks: Language Modelling, Machine Translation

RNNs as psycholinguistic subjects: Syntactic state and grammatical dependency

1 code implementation 5 Sep 2018 Richard Futrell, Ethan Wilcox, Takashi Morita, Roger Levy

Recurrent neural networks (RNNs) are the state of the art in sequence modeling for natural language.

Tasks: Language Modelling

What do RNN Language Models Learn about Filler-Gap Dependencies?

no code implementations 31 Aug 2018 Ethan Wilcox, Roger Levy, Takashi Morita, Richard Futrell

RNN language models have achieved state-of-the-art perplexity results and have proven useful in a suite of NLP tasks, but it is as yet unclear what syntactic generalizations they learn.

A Statistical Comparison of Some Theories of NP Word Order

1 code implementation 8 Sep 2017 Richard Futrell, Roger Levy, Matthew Dryer

A frequent object of study in linguistic typology is the order of elements {demonstrative, adjective, numeral, noun} in the noun phrase.

Tasks: Regression

The Natural Stories Corpus

1 code implementation LREC 2018 Richard Futrell, Edward Gibson, Hal Tily, Idan Blank, Anastasia Vishnevetsky, Steven T. Piantadosi, Evelina Fedorenko

It is now a common practice to compare models of human language processing by predicting participant reactions (such as reading times) to corpora consisting of rich naturalistic linguistic materials.
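
Such corpora are typically analyzed by regressing per-word reading times on predictors like surprisal and word length. The sketch below is my own illustration of that analysis pattern with made-up numbers, not the corpus or the authors' analysis.

```python
# Minimal sketch: ordinary least squares of reading times on surprisal and
# word length (toy data).
import numpy as np

# columns: surprisal (bits), word length (characters)
X = np.array([[3.1, 3], [7.8, 9], [2.4, 2], [9.5, 11], [5.0, 6]], dtype=float)
rt = np.array([260., 342., 251., 388., 301.])      # reading times in ms

design = np.column_stack([np.ones(len(rt)), X])     # add an intercept
coef, *_ = np.linalg.lstsq(design, rt, rcond=None)
print("intercept, surprisal slope, length slope:", coef)
```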

Noisy-context surprisal as a human sentence processing cost model

no code implementations EACL 2017 Richard Futrell, Roger Levy

We use the noisy-channel theory of human sentence comprehension to develop an incremental processing cost model that unifies and extends key features of expectation-based and memory-based models.

Tasks: Sentence
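
On my reading of the abstract, the core quantity is roughly the following (the notation is mine, not the paper's): the cost of a word is its surprisal given a noisy memory representation of the context rather than the veridical context.

```latex
% Sketch of noisy-context surprisal (my notation): the cost of word $w_t$ is
% its surprisal under a noisy memory representation $m$ of the true context
% $w_{1:t-1}$, marginalizing over contexts consistent with that memory trace.
\[
  \mathrm{cost}(w_t) \;=\; -\log p(w_t \mid m)
  \;=\; -\log \sum_{c'} p(w_t \mid c')\, p(c' \mid m),
  \qquad m \sim p_{\mathrm{noise}}(m \mid w_{1:t-1}).
\]
```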

A Generative Model of Phonotactics

no code implementations TACL 2017 Richard Futrell, Adam Albright, Peter Graff, Timothy J. O{'}Donnell

We present a probabilistic model of phonotactics, the set of well-formed phoneme sequences in a language.

Memory access during incremental sentence processing causes reading time latency

no code implementations WS 2016 Cory Shain, Marten van Schijndel, Richard Futrell, Edward Gibson, William Schuler

Studies on the role of memory as a predictor of reading time latencies (1) differ in their predictions about when memory effects should occur in processing and (2) have had mixed results, with strong positive effects emerging from isolated constructed stimuli and weak or even negative effects emerging from naturally-occurring stimuli.

Tasks: Sentence

Response to Liu, Xu, and Liang (2015) and Ferrer-i-Cancho and Gómez-Rodríguez (2015) on Dependency Length Minimization

no code implementations 1 Oct 2015 Richard Futrell, Kyle Mahowald, Edward Gibson

We address recent criticisms (Liu et al., 2015; Ferrer-i-Cancho and Gómez-Rodríguez, 2015) of our work on empirical evidence of dependency length minimization across languages (Futrell et al., 2015).
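
For reference, the basic quantity under debate can be computed directly from a dependency parse; the sketch below shows only this definition, not the analyses in the papers.

```python
# Minimal sketch: total dependency length of a sentence, the sum of linear
# distances between each word and its head (1-indexed heads; 0 = root).
def total_dependency_length(heads):
    return sum(abs(i - h) for i, h in enumerate(heads, start=1) if h != 0)

# "John threw out the trash": heads of John, threw, out, the, trash
print(total_dependency_length([2, 0, 2, 5, 2]))   # 1 + 1 + 1 + 3 = 6
```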
