no code implementations • CMCL (ACL) 2022 • Jennifer Hu, Roger Levy, Sebastian Schuster
Here, we test the hypothesis that scalar inference (SI) rates depend on the listener’s confidence in the underlying scale, which we operationalize as uncertainty over the distribution of possible alternatives conditioned on the context.
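A natural way to cash out this operationalization is Shannon entropy over a context-conditioned distribution of scale-mates. A minimal sketch, with hand-specified distributions standing in for model-derived ones (the alternatives and probabilities are illustrative, not the paper's materials):

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical distributions over alternatives to "some" in two contexts.
peaked = {"all": 0.85, "most": 0.10, "many": 0.05}  # confident about the scale
flat   = {"all": 0.35, "most": 0.35, "many": 0.30}  # uncertain about the scale

print(entropy(peaked.values()))  # ~0.75 bits
print(entropy(flat.values()))    # ~1.58 bits
```

Under the hypothesis, contexts like the first (low entropy, high scale confidence) should yield higher SI rates than contexts like the second.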
1 code implementation • 3 Apr 2024 • Jennifer Hu, Michael C. Frank
Developmental psychologists have argued about when cognitive capacities such as language understanding or theory of mind emerge.
1 code implementation • 19 Jan 2024 • Jennifer Hu, Kyle Mahowald, Gary Lupyan, Anna Ivanova, Roger Levy
Do Large Language Models (LLMs) make human-like linguistic generalizations?
1 code implementation • 22 May 2023 • Jennifer Hu, Roger Levy
Prompting is now a dominant method for evaluating the linguistic knowledge of large language models (LLMs).
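The methodological question this raises is how prompting compares with measuring the model's probabilities directly. A minimal sketch of the two styles, assuming a Hugging Face causal LM; the model ("gpt2") and the example sentence are illustrative stand-ins, not the paper's setup:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def sentence_logprob(sentence):
    """Direct measurement: total log-probability of the sentence."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        lp = torch.log_softmax(model(ids).logits[0, :-1], dim=-1)
    return lp.gather(1, ids[0, 1:].unsqueeze(1)).sum().item()

sentence = "The keys to the cabinet are on the table."
print(sentence_logprob(sentence))

# Metalinguistic prompting: ask about the sentence and compare the
# probabilities of one-token answers (" Yes" and " No" are single
# GPT-2 tokens).
prompt = f'Is the following sentence grammatical? "{sentence}" Answer:'
ids = tok(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    next_lp = torch.log_softmax(model(ids).logits[0, -1], dim=-1)
for ans in (" Yes", " No"):
    print(ans, next_lp[tok(ans).input_ids[0]].item())
```

Direct measurement reads quantities off the model's distribution; the prompting route additionally depends on the model's ability to answer metalinguistic questions about its own knowledge.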
1 code implementation • 7 Apr 2023 • Jennifer Hu, Roger Levy, Judith Degen, Sebastian Schuster
Here, we test a shared mechanism explaining SI rates within and across scales: context-driven expectations about the unspoken alternatives.
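One way to make such expectations concrete is to read them off a language model's next-word distribution in context. A minimal sketch, where the model ("gpt2"), the context, and the alternative set are illustrative stand-ins rather than the paper's materials:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

context = "After the party, John had eaten"
ids = tok(context, return_tensors="pt").input_ids
with torch.no_grad():
    probs = torch.softmax(model(ids).logits[0, -1], dim=-1)

# Assumes each alternative is a single GPT-2 token (true for these).
for alt in (" some", " most", " all"):
    print(alt, probs[tok(alt).input_ids[0]].item())
```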
no code implementations • 20 Dec 2022 • Pei Zhou, Andrew Zhu, Jennifer Hu, Jay Pujara, Xiang Ren, Chris Callison-Burch, Yejin Choi, Prithviraj Ammanabrolu
We propose a novel task, G4C, to study teacher-student natural language interactions in a goal-driven and grounded environment.
1 code implementation • 13 Dec 2022 • Jennifer Hu, Sammy Floyd, Olessia Jouravlev, Evelina Fedorenko, Edward Gibson
We perform a fine-grained comparison of language models and humans on seven pragmatic phenomena, using zero-shot prompting on an expert-curated set of English materials.
no code implementations • 15 Nov 2022 • Daniel Fried, Nicholas Tomlin, Jennifer Hu, Roma Patel, Aida Nematzadeh
People rely heavily on context to enrich meaning beyond what is literally said, enabling concise but effective communication.
1 code implementation • EMNLP 2021 • Yiwen Wang, Jennifer Hu, Roger Levy, Peng Qian
We find suggestive evidence that structural supervision helps models represent syntactic state across intervening content and improves performance in low-data settings, indicating that the benefits of hierarchical inductive biases for acquiring dependency relationships may extend beyond English.
no code implementations • 12 Aug 2021 • Jennifer Hu, Roger Levy, Noga Zaslavsky
Models of context-sensitive communication often use the Rational Speech Act framework (RSA; Frank & Goodman, 2012), which casts listeners and speakers as agents engaged in a cooperative reasoning process.
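For concreteness, here is a minimal RSA sketch for a toy reference game, using the standard glasses/hat scenario with an illustrative lexicon and uniform prior (not drawn from this paper; utterance costs are omitted):

```python
import numpy as np

# Rows: utterances ("glasses", "hat"); columns: referents
# (m1 = wears glasses only, m2 = wears glasses and a hat).
lexicon = np.array([[1.0, 1.0],   # "glasses" is true of both
                    [0.0, 1.0]])  # "hat" is true only of m2
prior = np.array([0.5, 0.5])      # uniform prior over referents
alpha = 1.0                       # speaker rationality

def normalize(m):
    return m / m.sum(axis=1, keepdims=True)

L0 = normalize(lexicon * prior)   # literal listener   P(m | u)
S1 = normalize((L0 ** alpha).T)   # pragmatic speaker  P(u | m)
L1 = normalize(S1.T * prior)      # pragmatic listener P(m | u)

print(L1[0])  # hearing "glasses": [0.75, 0.25], favoring m1
```

The pragmatic listener prefers m1 after hearing "glasses" because a speaker who meant m2 had the more informative "hat" available.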
no code implementations • ACL 2020 • Jon Gauthier, Jennifer Hu, Ethan Wilcox, Peng Qian, Roger Levy
Targeted syntactic evaluations have yielded insights into the generalizations learned by neural network language models.
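In their simplest form, such evaluations check whether a model assigns higher probability to the grammatical member of a minimal pair. A minimal sketch, assuming a Hugging Face causal LM; the model ("gpt2") and the item are illustrative stand-ins:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def sentence_logprob(sentence):
    """Total log-probability of the sentence under the model."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        lp = torch.log_softmax(model(ids).logits[0, :-1], dim=-1)
    return lp.gather(1, ids[0, 1:].unsqueeze(1)).sum().item()

# Subject-verb agreement across an intervening relative clause.
grammatical = "The author that the critics praised is famous."
ungrammatical = "The author that the critics praised are famous."
print(sentence_logprob(grammatical) > sentence_logprob(ungrammatical))
```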
1 code implementation • 2 Jun 2020 • Ethan Gotlieb Wilcox, Jon Gauthier, Jennifer Hu, Peng Qian, Roger Levy
Human reading behavior is tuned to the statistics of natural language: the time it takes human subjects to read a word can be predicted from estimates of the word's probability in context.
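The standard linking quantity here is surprisal, -log P(word | context), which reading-time studies regress against human reading times. A minimal sketch of computing per-token surprisal from a causal LM; "gpt2" and the garden-path sentence are illustrative stand-ins:

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The old man the boats.", return_tensors="pt").input_ids
with torch.no_grad():
    lp = torch.log_softmax(model(ids).logits[0, :-1], dim=-1)

for pos in range(1, ids.shape[1]):
    token = tok.decode(int(ids[0, pos]))
    surprisal = -lp[pos - 1, ids[0, pos]].item() / math.log(2)  # in bits
    print(f"{token!r}: {surprisal:.2f} bits")
```

On a garden-path sentence like this one, surprisal should spike at the disambiguating region, mirroring human slowdowns.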
no code implementations • 13 May 2020 • Noga Zaslavsky, Jennifer Hu, Roger P. Levy
What computational principles underlie human pragmatic reasoning?
1 code implementation • ACL 2020 • Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, Roger P. Levy
While state-of-the-art neural network models continue to achieve lower perplexity scores on language modeling benchmarks, it remains unknown whether optimizing for broad-coverage predictive performance leads to human-like syntactic knowledge.
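Perplexity, the broad-coverage metric at issue, is the exponentiated average negative log-probability per token. A toy worked example with made-up token probabilities:

```python
import math

token_probs = [0.2, 0.05, 0.5, 0.1]  # illustrative P(w_i | context)
nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
print(math.exp(nll))  # perplexity ~ 6.7, an effective branching factor
```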
1 code implementation • NAACL 2018 • Will Monroe, Jennifer Hu, Andrew Jong, Christopher Potts
Contextual influences on language often exhibit substantial cross-lingual regularities; for example, we are more verbose in situations that require finer distinctions.