no code implementations • NAACL (DADC) 2022 • Venelin Kovatchev, Trina Chatterjee, Venkata S Govindarajan, Jifan Chen, Eunsol Choi, Gabriella Chronis, Anubrata Das, Katrin Erk, Matthew Lease, Junyi Jessy Li, Yating Wu, Kyle Mahowald
Developing methods to adversarially challenge NLP systems is a promising avenue for improving both model performance and interpretability.
no code implementations • ACL 2022 • Isabel Papadimitriou, Richard Futrell, Kyle Mahowald
Because meaning can often be inferred from lexical semantics alone, word order is often a redundant cue in natural language.
no code implementations • NAACL (SIGTYP) 2022 • Sihan Chen, Richard Futrell, Kyle Mahowald
Drawing on data from Nintemann et al. (2020), we explore variability in complexity and informativity across spatial demonstrative systems, using spatial deictic lexicons from 223 languages.
no code implementations • 16 Feb 2023 • Michail Mersinias, Kyle Mahowald
We explore incorporating natural language inference (NLI) into the text generation pipeline by using a pre-trained NLI model to assess whether a generated sentence entails, contradicts, or is neutral with respect to the prompt and the preceding text.
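As a rough illustration of this idea, the sketch below scores candidate continuations against the preceding text with an off-the-shelf NLI model and keeps those that do not contradict it; the checkpoint (roberta-large-mnli), the 0.5 threshold, and the example sentences are assumptions for illustration, not the paper's pipeline.

```python
# Minimal sketch: filter candidate continuations with a pre-trained NLI model.
# The checkpoint, threshold, and examples are illustrative assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

nli_name = "roberta-large-mnli"  # assumed off-the-shelf NLI checkpoint
tokenizer = AutoTokenizer.from_pretrained(nli_name)
nli_model = AutoModelForSequenceClassification.from_pretrained(nli_name)

def nli_scores(premise: str, hypothesis: str) -> dict:
    """Probabilities that `hypothesis` contradicts / is neutral to / is entailed by `premise`."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = nli_model(**inputs).logits
    probs = logits.softmax(dim=-1).squeeze(0)
    # roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
    return {"contradiction": probs[0].item(),
            "neutral": probs[1].item(),
            "entailment": probs[2].item()}

context = "The museum opened a new wing dedicated to modern sculpture."
candidates = [
    "Visitors can now see contemporary sculpture in the new wing.",
    "The museum has no sculpture on display.",
]
# Keep only candidates the NLI model does not judge to contradict the context.
kept = [c for c in candidates if nli_scores(context, c)["contradiction"] < 0.5]
print(kept)
```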
no code implementations • 29 Jan 2023 • Kyle Mahowald
I validate the prompt using the CoLA corpus of acceptability judgments and then zero in on the AANN construction (Article + Adjective + Numeral + Noun, as in "a beautiful five days").
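For a sense of what evaluating the AANN construction with a language model can look like, here is a minimal sketch that compares summed token log-probabilities under GPT-2 for an AANN string and a word-order variant; this is a simpler, log-probability-based stand-in for illustration, not the prompting setup used in the paper.

```python
# Illustrative sketch (not the paper's prompting method): compare an AANN
# string ("a beautiful five days") with a word-order variant by summed
# token log-probability under GPT-2.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")
lm.eval()

def sentence_logprob(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)
    # loss is the mean negative log-likelihood per predicted token;
    # multiply by the number of predicted tokens to get a total log-prob.
    return -out.loss.item() * (ids.shape[1] - 1)

print(sentence_logprob("The family spent a beautiful five days in London."))
print(sentence_logprob("The family spent a five beautiful days in London."))
```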
no code implementations • 16 Jan 2023 • Kyle Mahowald, Anna A. Ivanova, Idan A. Blank, Nancy Kanwisher, Joshua B. Tenenbaum, Evelina Fedorenko
Here, we review the capabilities of LLMs by considering their performance on two different aspects of language use: 'formal linguistic competence', which includes knowledge of rules and patterns of a given language, and 'functional linguistic competence', a host of cognitive abilities required for language understanding and use in the real world.
no code implementations • 19 Dec 2022 • Jing Huang, Zhengxuan Wu, Kyle Mahowald, Christopher Potts
This allows us to encode robust, position-independent character-level information in the internal representations of subword-based models.
1 code implementation • 1 Nov 2022 • Anuj Diwan, Layne Berry, Eunsol Choi, David Harwath, Kyle Mahowald
Recent visuolinguistic pre-trained models show promising progress on various end tasks such as image retrieval and video captioning.
1 code implementation • NAACL 2022 • Ayush Kaushal, Kyle Mahowald
Pre-trained language models (PLMs) that use subword tokenization schemes can succeed at a variety of language tasks that require character-level information, despite lacking explicit access to the character composition of tokens.
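One way to make this concrete is to probe whether static subword embeddings encode character information at all. The sketch below trains a logistic-regression probe to predict, from a token's input embedding alone, whether the token contains a given character; the bert-base-uncased vocabulary, the target character "k", and the probe choice are illustrative assumptions rather than the paper's exact setup.

```python
# Sketch of a character-presence probe over static subword embeddings.
# Vocabulary, target character, and classifier are illustrative assumptions.
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from transformers import AutoModel, AutoTokenizer

name = "bert-base-uncased"
tok = AutoTokenizer.from_pretrained(name)
emb = AutoModel.from_pretrained(name).get_input_embeddings().weight.detach()

# Alphabetic vocabulary items, with WordPiece "##" prefixes stripped,
# and a binary label: does the token contain the letter "k"?
items = [(i, t.lstrip("#")) for t, i in tok.get_vocab().items() if t.lstrip("#").isalpha()]
X = torch.stack([emb[i] for i, _ in items]).numpy()
y = [int("k" in t) for _, t in items]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("char-presence probe accuracy:", probe.score(X_te, y_te))
```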
no code implementations • 30 Jan 2022 • Kyle Mahowald, Evgeniia Diachek, Edward Gibson, Evelina Fedorenko, Richard Futrell
The conclusion is that grammatical cues such as word order are necessary to convey agenthood and patienthood in at most 10-15% of naturally occurring sentences; nevertheless, they (a) provide an important source of redundancy and (b) are crucial for conveying intended meanings that cannot be inferred from the words alone, including descriptions of human interactions, where roles are often reversible (e.g., Ray helped Lu / Lu helped Ray), and expressions of non-prototypical meanings (e.g., "The bone chewed the dog").
1 code implementation • EMNLP 2021 • Alex Jones, William Yang Wang, Kyle Mahowald
We verify some of our linguistic findings by looking at the effect of morphological segmentation on English-Inuktitut alignment, in addition to examining the effect of word order agreement on isomorphism for 66 zero-shot language pairs from a different corpus.
no code implementations • NAACL 2021 • Tiago Pimentel, Irene Nikkarinen, Kyle Mahowald, Ryan Cotterell, Damián Blasi
Examining corpora from 7 typologically diverse languages, we use those upper bounds to quantify the lexicon's optimality and to explore the relative costs of major constraints on natural codes.
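As a toy illustration of how an upper bound on lexicon cost can be obtained, the sketch below computes the average code length of wordforms under a unigram character model: any such achievable code length upper-bounds the optimal one. The word list and the unigram assumption are purely illustrative and are not the estimators used in the paper.

```python
# Toy sketch: a unigram character model yields a loose upper bound on the
# average code length (bits per wordform) of a lexicon. The tiny word list
# and unigram assumption are illustrative only.
import math
from collections import Counter

lexicon = ["cat", "dog", "house", "tree", "water", "stone", "bird", "fish"]
chars = [c for w in lexicon for c in w + "#"]      # "#" marks end of word
counts = Counter(chars)
total = sum(counts.values())
probs = {c: n / total for c, n in counts.items()}

def code_length(word: str) -> float:
    """Bits needed to encode the word under the unigram character model."""
    return -sum(math.log2(probs[c]) for c in word + "#")

avg_bits = sum(code_length(w) for w in lexicon) / len(lexicon)
print(f"average upper-bound code length: {avg_bits:.2f} bits per wordform")
```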
1 code implementation • NeurIPS 2021 • Joshua Rozner, Christopher Potts, Kyle Mahowald
Cryptic crosswords, the dominant crossword variety in the UK, are a promising target for advancing NLP systems that seek to process semantically complex, highly compositional language.
1 code implementation • EACL 2021 • Isabel Papadimitriou, Ethan A. Chi, Richard Futrell, Kyle Mahowald
Further examining the characteristics that our classifiers rely on, we find that features such as passive voice, animacy, and case strongly correlate with classification decisions. This suggests that mBERT does not encode subjecthood purely syntactically; rather, its encoding of subjecthood is continuous and depends on semantic and discourse factors, as proposed in much of the functional linguistics literature.
1 code implementation • EMNLP 2020 • Dallas Card, Peter Henderson, Urvashi Khandelwal, Robin Jia, Kyle Mahowald, Dan Jurafsky
Despite its importance to experimental design, statistical power (the probability that, given a real effect, an experiment will reject the null hypothesis) has largely been ignored by the NLP community.
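The sketch below illustrates a simulation-based power estimate for comparing two classifiers on a shared test set, using an exact sign test on the items where they disagree; the effect size, test-set size, independence of errors across systems, and choice of test are illustrative assumptions, not the paper's settings.

```python
# Minimal simulation-based power estimate for comparing two classifiers'
# accuracies on a shared test set. Effect size, test-set size, and the
# independence assumption are illustrative, not the paper's settings.
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)

def simulated_power(acc_a=0.80, acc_b=0.82, n_items=2000, n_sims=1000, alpha=0.05):
    rejections = 0
    for _ in range(n_sims):
        # Simulate per-item correctness for each system (independently here).
        correct_a = rng.random(n_items) < acc_a
        correct_b = rng.random(n_items) < acc_b
        # Exact sign test on the items where the systems disagree (McNemar-style).
        b_only = int(np.sum(correct_b & ~correct_a))
        a_only = int(np.sum(correct_a & ~correct_b))
        n_disagree = a_only + b_only
        if n_disagree == 0:
            continue
        if binomtest(b_only, n_disagree, 0.5).pvalue < alpha:
            rejections += 1
    return rejections / n_sims

print("estimated power:", simulated_power())
```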
no code implementations • 1 Oct 2015 • Richard Futrell, Kyle Mahowald, Edward Gibson
We address recent criticisms (Liu et al., 2015; Ferrer-i-Cancho and Gómez-Rodríguez, 2015) of our work on empirical evidence of dependency length minimization across languages (Futrell et al., 2015).