no code implementations • ACL 2022 • Isabel Papadimitriou, Richard Futrell, Kyle Mahowald
Because meaning can often be inferred from lexical semantics alone, word order is often a redundant cue in natural language.
1 code implementation • EMNLP 2021 • Neil Rathi, Michael Hahn, Richard Futrell
Linguistic typology generally divides synthetic languages into groups based on their morphological fusion.
1 code implementation • ACL (SIGMORPHON) 2021 • Huteng Dai, Richard Futrell
We introduce a simple and highly general phonotactic learner which induces a probabilistic finite-state automaton from word-form data.
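A minimal sketch of the idea, not the authors' implementation: induce a bigram PFSA whose states are the preceding segment and whose arcs carry add-alpha smoothed transition probabilities, then score word forms under it (all names below are illustrative):

```python
from collections import defaultdict
import math

def induce_pfsa(wordforms, alpha=0.1):
    """Induce a bigram PFSA: states are the previous symbol, arcs carry
    add-alpha smoothed transition probabilities. '#' marks word edges."""
    counts = defaultdict(lambda: defaultdict(float))
    symbols = {'#'}
    for word in wordforms:
        segs = ['#'] + list(word) + ['#']
        for prev, nxt in zip(segs, segs[1:]):
            counts[prev][nxt] += 1
            symbols.update((prev, nxt))
    probs = {}
    for prev in symbols:
        total = sum(counts[prev].values()) + alpha * len(symbols)
        probs[prev] = {s: (counts[prev].get(s, 0.0) + alpha) / total for s in symbols}
    return probs

def log_prob(word, probs):
    """Log-probability of a word form under the induced automaton."""
    segs = ['#'] + list(word) + ['#']
    return sum(math.log(probs[p][n]) for p, n in zip(segs, segs[1:]))

pfsa = induce_pfsa(['blik', 'brik', 'blak'])
print(log_prob('blik', pfsa) > log_prob('kbil', pfsa))  # attested form scores higher: True
```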
no code implementations • NAACL (SIGTYP) 2022 • Sihan Chen, Richard Futrell, Kyle Mahowald
Drawing on data from Nintemann et al. (2020), we explore the variability in complexity and informativity across spatial demonstrative systems, using deictic lexicons from 223 languages.
no code implementations • CMCL (ACL) 2022 • Richard Futrell
We investigate how to use pretrained static word embeddings to deliver improved estimates of bilexical co-occurrence probabilities: conditional probabilities of one word given a single other word in a specific relationship.
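One simple way to cash this out (a sketch under our own assumptions, not the paper's estimator): treat p(w | c) as a temperature-scaled softmax over dot products of static embedding vectors. The random vectors below stand in for real pretrained embeddings:

```python
import numpy as np

def bilexical_probs(emb, vocab, context_word, temperature=1.0):
    """Sketch: conditional distribution over words given one context word,
    via a temperature-scaled softmax over embedding dot products.
    `emb` is a (|V|, d) matrix of pretrained static embeddings."""
    c = emb[vocab[context_word]]
    scores = emb @ c / temperature
    scores -= scores.max()            # for numerical stability
    p = np.exp(scores)
    return p / p.sum()

# Hypothetical toy example with random stand-in vectors:
rng = np.random.default_rng(0)
vocab = {w: i for i, w in enumerate(['dog', 'barks', 'meows', 'cat'])}
emb = rng.standard_normal((len(vocab), 50))
p = bilexical_probs(emb, vocab, 'dog')
print({w: round(float(p[i]), 3) for w, i in vocab.items()})
```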
no code implementations • COLING 2022 • Michaela Socolof, Jacob Louis Hoover, Richard Futrell, Alessandro Sordoni, Timothy J. O’Donnell
Morphological systems across languages vary when it comes to the relation between form and meaning.
no code implementations • CL (ACL) 2022 • Himanshu Yadav, Samar Husain, Richard Futrell
In Experiment 1, we compare the distribution of formal properties of crossing dependencies, such as gap degree, between real trees and baseline trees matched for rate of crossing dependencies and various other properties.
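As background on the formal property at issue, a small illustrative helper (not the paper's code) that counts crossing arcs in a dependency tree given each word's head index:

```python
def crossing_arcs(heads):
    """Count crossing pairs of dependency arcs. `heads[i]` is the
    1-based head of word i+1; 0 marks the root."""
    arcs = [(min(i + 1, h), max(i + 1, h)) for i, h in enumerate(heads) if h != 0]
    crossings = 0
    for a, (l1, r1) in enumerate(arcs):
        for l2, r2 in arcs[a + 1:]:
            # Two arcs cross iff each has exactly one endpoint strictly
            # inside the other's span.
            if (l1 < l2 < r1 < r2) or (l2 < l1 < r2 < r1):
                crossings += 1
    return crossings

print(crossing_arcs([3, 4, 0, 3]))  # toy non-projective tree -> 1 crossing
```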
no code implementations • EMNLP (CMCL) 2020 • Kartik Sharma, Richard Futrell, Samar Husain
In this work, we investigate whether the order and relative distance of preverbal dependents in Hindi, an SOV language, is affected by factors motivated by efficiency considerations during comprehension/production.
1 code implementation • 12 Jan 2024 • Julie Kallini, Isabel Papadimitriou, Richard Futrell, Kyle Mahowald, Christopher Potts
Chomsky and others have very directly claimed that large language models (LLMs) are equally capable of learning languages that are possible and impossible for humans to learn.
1 code implementation • 29 Dec 2023 • Manikanta Loya, Divya Anand Sinha, Richard Futrell
However, these studies have not always properly accounted for the sensitivity of LLMs' behavior to hyperparameters and variations in the prompt.
1 code implementation • 6 Jun 2023 • Thomas Hikaru Clark, Clara Meister, Tiago Pimentel, Michael Hahn, Ryan Cotterell, Richard Futrell, Roger Levy
Here, we ask whether a pressure for UID may have influenced word order patterns cross-linguistically.
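UID is commonly operationalized as low variance in per-word surprisal; a toy sketch with hypothetical surprisal values for two orderings of the same sentence:

```python
def uid_variance(surprisals):
    """Lower variance in per-word surprisal = more uniform information density."""
    mean = sum(surprisals) / len(surprisals)
    return sum((s - mean) ** 2 for s in surprisals) / len(surprisals)

# Hypothetical per-word surprisals (bits) under two word orders:
order_a = [2.1, 2.3, 2.0, 2.4]  # information spread evenly
order_b = [0.5, 0.4, 6.2, 1.7]  # information concentrated on one word
print(uid_variance(order_a) < uid_variance(order_b))  # True: order_a is more uniform
```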
no code implementations • 16 Dec 2022 • Jiaxuan Li, Richard Futrell
We advance an information-theoretic model of human language processing in the brain, in which incoming linguistic input is processed at two levels, in terms of a heuristic interpretation and in terms of error correction.
1 code implementation • 11 Mar 2022 • Isabel Papadimitriou, Richard Futrell, Kyle Mahowald
Because meaning can often be inferred from lexical semantics alone, word order is often a redundant cue in natural language.
no code implementations • 30 Jan 2022 • Kyle Mahowald, Evgeniia Diachek, Edward Gibson, Evelina Fedorenko, Richard Futrell
The conclusion is that grammatical cues such as word order are necessary to convey subjecthood and objecthood in only a minority of naturally occurring transitive clauses; nevertheless, they (a) provide an important source of redundancy and (b) are crucial for conveying intended meanings that cannot be inferred from the words alone, including descriptions of human interactions, where roles are often reversible (e.g., Ray helped Lu / Lu helped Ray), and expressions of non-prototypical meanings (e.g., "The bone chewed the dog").
1 code implementation • 21 Apr 2021 • Michael Hahn, Dan Jurafsky, Richard Futrell
We introduce a theoretical framework for understanding and predicting the complexity of sequence classification tasks, using a novel extension of the theory of Boolean function sensitivity.
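As background, classical Boolean sensitivity (which the paper extends to sequence classification) can be computed by brute force; a small sketch:

```python
from itertools import product

def sensitivity_at(f, x):
    """Number of single-bit flips of x that change f(x)."""
    return sum(f(x[:i] + (1 - x[i],) + x[i+1:]) != f(x) for i in range(len(x)))

def average_sensitivity(f, n):
    """Average sensitivity over all 2^n inputs; higher values indicate
    less 'local', harder-to-learn functions."""
    inputs = list(product((0, 1), repeat=n))
    return sum(sensitivity_at(f, x) for x in inputs) / len(inputs)

parity = lambda x: sum(x) % 2  # maximally sensitive: every flip matters
first_bit = lambda x: x[0]     # minimally sensitive: one coordinate matters
print(average_sensitivity(parity, 4))     # 4.0
print(average_sensitivity(first_bit, 4))  # 1.0
```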
1 code implementation • EACL 2021 • Isabel Papadimitriou, Ethan A. Chi, Richard Futrell, Kyle Mahowald
Further examining the characteristics that our classifiers rely on, we find that features such as passive voice, animacy and case strongly correlate with classification decisions, suggesting that mBERT does not encode subjecthood purely syntactically, but that subjecthood embedding is continuous and dependent on semantic and discourse factors, as is proposed in much of the functional linguistics literature.
no code implementations • Findings (ACL) 2021 • William Dyer, Richard Futrell, Zoey Liu, Gregory Scontras
Languages vary in their placement of multiple adjectives before, after, or surrounding the noun, but they typically exhibit strong intra-language tendencies on the relative order of those adjectives (e.g., the preference for "big blue box" in English, "grande boîte bleue" in French, and "alsundūq al'azraq alkabīr" in Arabic).
no code implementations • EMNLP 2020 • Ethan Wilcox, Peng Qian, Richard Futrell, Ryosuke Kohita, Roger Levy, Miguel Ballesteros
Humans can learn structural properties about a word from minimal experience, and deploy their learned syntactic representations uniformly in different grammatical contexts.
no code implementations • ACL 2020 • Richard Futrell, William Dyer, Greg Scontras
The four theories we test are subjectivity (Scontras et al., 2017), information locality (Futrell, 2019), integration cost (Dyer, 2017), and information gain, which we introduce.
1 code implementation • 22 Jun 2020 • Richard Futrell
I present a computational-level model of semantic interference effects in word production.
no code implementations • WS 2019 • Ethan Wilcox, Roger Levy, Richard Futrell
Deep learning sequence models have led to a marked increase in performance for a range of Natural Language Processing tasks, but it remains an open question whether they are able to induce proper hierarchical generalizations for representing natural language from linear input alone.
no code implementations • NAACL 2019 • Aida Nematzadeh, Richard Futrell, Roger Levy
We survey current computational models of language acquisition, their limitations, and how insights from these models can be incorporated into NLP applications.
no code implementations • 24 May 2019 • Ethan Wilcox, Roger Levy, Richard Futrell
Here, we provide new evidence that RNN language models are sensitive to hierarchical syntactic structure by investigating the filler--gap dependency and constraints on it, known as syntactic islands.
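This line of work typically measures a 2×2 wh-licensing interaction over LM surprisals at the critical region; a sketch with hypothetical surprisal values (the stimuli in the comments are illustrative):

```python
def wh_licensing_interaction(s):
    """2x2 interaction of surprisals at the critical region.
    A large positive value means the model expects a gap only when a
    wh-filler is present. Keys: (has_filler, has_gap) booleans."""
    return ((s[(False, True)] - s[(False, False)])
            - (s[(True, True)] - s[(True, False)]))

# Hypothetical surprisals (bits) at the post-gap region:
surprisal = {
    (True, True): 3.0,    # "I know what the lion devoured __ at sunrise."
    (True, False): 8.0,   # "I know what the lion devoured the gazelle at sunrise."
    (False, True): 9.0,   # "I know that the lion devoured __ at sunrise."
    (False, False): 4.0,  # "I know that the lion devoured the gazelle at sunrise."
}
print(wh_licensing_interaction(surprisal))  # 10.0 -> the dependency has been learned
```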
2 code implementations • NAACL 2019 • Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, Roger Levy
We deploy the methods of controlled psycholinguistic experimentation to shed light on the extent to which the behavior of neural network language models reflects incremental representations of syntactic state.
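The basic measurement behind this paradigm is per-token surprisal from an incremental language model. A minimal sketch using GPT-2 via Hugging Face transformers as a stand-in for the models studied in the paper:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def per_token_surprisal(sentence):
    """Return (token, surprisal-in-bits) pairs for each non-initial token."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logps = torch.log_softmax(logits[0, :-1], dim=-1)
    nll = -logps[torch.arange(ids.size(1) - 1), ids[0, 1:]]
    bits = nll / torch.log(torch.tensor(2.0))
    return list(zip(tok.convert_ids_to_tokens(ids[0, 1:].tolist()), bits.tolist()))

# Surprisal should spike at the disambiguating verb of a garden-path sentence:
print(per_token_surprisal("While the man hunted the deer ran into the woods"))
```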
no code implementations • NAACL 2019 • Ethan Wilcox, Peng Qian, Richard Futrell, Miguel Ballesteros, Roger Levy
State-of-the-art LSTM language models trained on large corpora learn sequential contingencies in impressive detail and have been shown to acquire a number of non-local grammatical dependencies with some success.
1 code implementation • WS 2019 • Richard Futrell, Roger P. Levy
We collect human acceptability ratings for our stimuli, in the first acceptability judgment experiment directly manipulating the predictors of syntactic alternations.
no code implementations • WS 2018 • Ethan Wilcox, Roger Levy, Takashi Morita, Richard Futrell
RNN language models have achieved state-of-the-art perplexity results and have proven useful in a suite of NLP tasks, but it is as yet unclear what syntactic generalizations they learn.
1 code implementation • 5 Sep 2018 • Richard Futrell, Ethan Wilcox, Takashi Morita, Roger Levy
Recurrent neural networks (RNNs) are the state of the art in sequence modeling for natural language.
no code implementations • 31 Aug 2018 • Ethan Wilcox, Roger Levy, Takashi Morita, Richard Futrell
RNN language models have achieved state-of-the-art perplexity results and have proven useful in a suite of NLP tasks, but it is as yet unclear what syntactic generalizations they learn.
1 code implementation • 8 Sep 2017 • Richard Futrell, Roger Levy, Matthew Dryer
A frequent object of study in linguistic typology is the order of elements {demonstrative, adjective, numeral, noun} in the noun phrase.
1 code implementation • LREC 2018 • Richard Futrell, Edward Gibson, Hal Tily, Idan Blank, Anastasia Vishnevetsky, Steven T. Piantadosi, Evelina Fedorenko
It is now a common practice to compare models of human language processing by predicting participant reactions (such as reading times) to corpora consisting of rich naturalistic linguistic materials.
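The typical analysis regresses reading times on surprisal plus controls and asks whether surprisal improves fit; a toy sketch on simulated data (all variables and values are hypothetical):

```python
import numpy as np

def fit_improvement(rt, surprisal, controls):
    """Sketch: residual error of a baseline regression (controls only)
    minus that of a regression that adds surprisal as a predictor."""
    n = len(rt)
    base = np.column_stack([np.ones(n)] + controls)
    full = np.column_stack([base, surprisal])
    rss = lambda X: np.sum((rt - X @ np.linalg.lstsq(X, rt, rcond=None)[0]) ** 2)
    return rss(base) - rss(full)  # > 0: surprisal helps predict reading times

rng = np.random.default_rng(1)
surp = rng.gamma(2.0, 2.0, 100)                      # simulated per-word surprisal
length = rng.integers(1, 12, 100).astype(float)      # word-length control
rt = 200 + 15 * surp + 5 * length + rng.normal(0, 20, 100)
print(fit_improvement(rt, surp, [length]) > 0)       # True
```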
no code implementations • EACL 2017 • Richard Futrell, Roger Levy
We use the noisy-channel theory of human sentence comprehension to develop an incremental processing cost model that unifies and extends key features of expectation-based and memory-based models.
no code implementations • TACL 2017 • Richard Futrell, Adam Albright, Peter Graff, Timothy J. O'Donnell
We present a probabilistic model of phonotactics, the set of well-formed phoneme sequences in a language.
no code implementations • WS 2016 • Cory Shain, Marten Van Schijndel, Richard Futrell, Edward Gibson, William Schuler
Studies on the role of memory as a predictor of reading time latencies (1) differ in their predictions about when memory effects should occur in processing and (2) have had mixed results, with strong positive effects emerging from isolated constructed stimuli and weak or even negative effects emerging from naturally-occurring stimuli.
no code implementations • 1 Oct 2015 • Richard Futrell, Kyle Mahowald, Edward Gibson
We address recent criticisms (Liu et al., 2015; Ferrer-i-Cancho and Gómez-Rodríguez, 2015) of our work on empirical evidence of dependency length minimization across languages (Futrell et al., 2015).
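The quantity at issue in this exchange, total dependency length, is simple to compute from a parse; a minimal sketch:

```python
def total_dependency_length(heads):
    """Sum of |head - dependent| over all arcs. `heads[i]` is the
    1-based position of word (i+1)'s head; 0 marks the root."""
    return sum(abs(h - (i + 1)) for i, h in enumerate(heads) if h != 0)

# Classic minimization example with a short object:
print(total_dependency_length([2, 0, 2, 5, 2]))  # "John threw out the trash" -> 6
print(total_dependency_length([2, 0, 4, 2, 2]))  # "John threw the trash out" -> 7
```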