Search Results for author: Jennifer Hu

Found 17 papers, 10 papers with code

Predicting scalar diversity with context-driven uncertainty over alternatives

no code implementations CMCL (ACL) 2022 Jennifer Hu, Roger Levy, Sebastian Schuster

Here, we test the hypothesis that SI rates depend on the listener’s confidence in the underlying scale, which we operationalize as uncertainty over the distribution of possible alternatives conditioned on the context.

Sentence, Sentence Embedding

Auxiliary task demands mask the capabilities of smaller language models

1 code implementation 3 Apr 2024 Jennifer Hu, Michael C. Frank

Developmental psychologists have argued about when cognitive capacities such as language understanding or theory of mind emerge.

Prompting is not a substitute for probability measurements in large language models

1 code implementation 22 May 2023 Jennifer Hu, Roger Levy

Prompting is now a dominant method for evaluating the linguistic knowledge of large language models (LLMs).
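The contrast the paper draws can be illustrated with a minimal sketch: instead of prompting a model with a metalinguistic question, one directly compares the total log-probability the model assigns to each member of a minimal pair. The sentences and per-token log-probabilities below are invented for illustration; a real evaluation would read them off a trained model's softmax outputs.

```python
# Hypothetical per-token log-probabilities for a subject-verb agreement
# minimal pair (values invented; only the agreeing verb differs).
token_logprobs = {
    "the keys to the cabinet are here": [-2.1, -3.0, -0.4, -1.2, -2.5, -1.8, -2.0],
    "the keys to the cabinet is here":  [-2.1, -3.0, -0.4, -1.2, -2.5, -4.9, -2.0],
}

def sentence_logprob(sentence):
    """Direct probability measurement: sum of per-token log-probs."""
    return sum(token_logprobs[sentence])

grammatical = "the keys to the cabinet are here"
ungrammatical = "the keys to the cabinet is here"
# The model "prefers" the grammatical variant if it scores higher.
print(sentence_logprob(grammatical) > sentence_logprob(ungrammatical))
```

This direct comparison requires no prompt engineering: the judgment falls out of quantities the model already computes.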

Expectations over Unspoken Alternatives Predict Pragmatic Inferences

1 code implementation 7 Apr 2023 Jennifer Hu, Roger Levy, Judith Degen, Sebastian Schuster

Here, we test a shared mechanism explaining SI rates within and across scales: context-driven expectations about the unspoken alternatives.

A fine-grained comparison of pragmatic language understanding in humans and language models

1 code implementation 13 Dec 2022 Jennifer Hu, Sammy Floyd, Olessia Jouravlev, Evelina Fedorenko, Edward Gibson

We perform a fine-grained comparison of language models and humans on seven pragmatic phenomena, using zero-shot prompting on an expert-curated set of English materials.

Pragmatics in Language Grounding: Phenomena, Tasks, and Modeling Approaches

no code implementations 15 Nov 2022 Daniel Fried, Nicholas Tomlin, Jennifer Hu, Roma Patel, Aida Nematzadeh

People rely heavily on context to enrich meaning beyond what is literally said, enabling concise but effective communication.

Grounded language learning

Controlled Evaluation of Grammatical Knowledge in Mandarin Chinese Language Models

1 code implementation EMNLP 2021 Yiwen Wang, Jennifer Hu, Roger Levy, Peng Qian

We find suggestive evidence that structural supervision helps with representing syntactic state across intervening content and improves performance in low-data settings, suggesting that the benefits of hierarchical inductive biases in acquiring dependency relationships may extend beyond English.

Inductive Bias

Scalable pragmatic communication via self-supervision

no code implementations 12 Aug 2021 Jennifer Hu, Roger Levy, Noga Zaslavsky

Models of context-sensitive communication often use the Rational Speech Act framework (RSA; Frank & Goodman, 2012), which formulates listeners and speakers in a cooperative reasoning process.
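The RSA framework mentioned above can be sketched in a few lines: a literal listener interprets utterances by their truth conditions, a pragmatic speaker chooses utterances to be informative to that listener, and a pragmatic listener reasons about that speaker. The lexicon, states, and rationality parameter below are a toy scalar-implicature example, not the paper's model.

```python
import numpy as np

def normalize(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

# Toy truth-value lexicon: rows = utterances, cols = world states
# (state 0 = none, state 1 = some-but-not-all, state 2 = all).
lexicon = np.array([
    [1., 0., 0.],  # "none"
    [0., 1., 1.],  # "some"
    [0., 0., 1.],  # "all"
])
utterances = ["none", "some", "all"]
prior = np.ones(3) / 3  # uniform prior over states (assumed)
alpha = 1.0             # speaker rationality (assumed)

# Literal listener: P_L0(state | utterance) ∝ truth * prior
L0 = normalize(lexicon * prior, axis=1)
# Pragmatic speaker: P_S1(utterance | state) ∝ L0 ** alpha
S1 = normalize(L0 ** alpha, axis=0)
# Pragmatic listener: P_L1(state | utterance) ∝ S1 * prior
L1 = normalize(S1 * prior, axis=1)

# Hearing "some", the pragmatic listener infers "not all":
print(L1[utterances.index("some")])  # [0.  , 0.75, 0.25]
```

The cooperative reasoning loop is what makes "some" strengthen to "some but not all", even though "some" is literally true of the "all" state.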

On the Predictive Power of Neural Language Models for Human Real-Time Comprehension Behavior

1 code implementation 2 Jun 2020 Ethan Gotlieb Wilcox, Jon Gauthier, Jennifer Hu, Peng Qian, Roger Levy

Human reading behavior is tuned to the statistics of natural language: the time it takes human subjects to read a word can be predicted from estimates of the word's probability in context.
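The quantity linking word probability to reading time is surprisal, -log2 P(word | context): less predictable words carry more surprisal and are read more slowly. The conditional probabilities below are invented for illustration; the paper estimates them with trained neural language models.

```python
import math

# Hypothetical next-word distribution for one context (values invented).
p_next = {
    ("the", "old"): {"man": 0.6, "car": 0.3, "ran": 0.1},
}

def surprisal(context, word):
    """Surprisal in bits: -log2 P(word | context)."""
    return -math.log2(p_next[context][word])

# Higher surprisal predicts slower reading, all else being equal.
print(surprisal(("the", "old"), "man"))  # ≈ 0.74 bits
print(surprisal(("the", "old"), "ran"))  # ≈ 3.32 bits
```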

Open-Ended Question Answering

A Rate-Distortion view of human pragmatic reasoning

no code implementations 13 May 2020 Noga Zaslavsky, Jennifer Hu, Roger P. Levy

What computational principles underlie human pragmatic reasoning?

A Systematic Assessment of Syntactic Generalization in Neural Language Models

1 code implementation ACL 2020 Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, Roger P. Levy

While state-of-the-art neural network models continue to achieve lower perplexity scores on language modeling benchmarks, it remains unknown whether optimizing for broad-coverage predictive performance leads to human-like syntactic knowledge.

Language Modelling

Generating Bilingual Pragmatic Color References

1 code implementation NAACL 2018 Will Monroe, Jennifer Hu, Andrew Jong, Christopher Potts

Contextual influences on language often exhibit substantial cross-lingual regularities; for example, we are more verbose in situations that require finer distinctions.
