no code implementations • 1 Mar 2024 • Polina Tsvilodub, Hening Wang, Sharon Grosch, Michael Franke
This paper systematically compares different methods of deriving item-level predictions of language models for multiple-choice tasks.
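Two of the most common ways of deriving such item-level predictions score each answer option by its log-probability under the model, either summed over tokens or length-normalized. The sketch below illustrates how these choices can disagree; the per-token log-probabilities and option strings are made up for illustration and are not from the paper.

```python
# Hedged sketch: comparing option-scoring methods for multiple-choice tasks.
# The token log-probabilities below are invented for illustration; a real
# setup would obtain them from a language model.

def sum_logprob(token_logprobs):
    # Total log-probability of the option string; biased toward short options
    return sum(token_logprobs)

def avg_logprob(token_logprobs):
    # Length-normalized score, less biased against longer options
    return sum(token_logprobs) / len(token_logprobs)

def best_option(options, score_fn):
    # Pick the option the (hypothetical) model scores highest
    return max(options, key=lambda name: score_fn(options[name]))

options = {
    "yes": [-1.5],                        # single-token answer
    "no, but close": [-1.2, -0.9, -1.1],  # multi-token answer
}

print(best_option(options, sum_logprob))  # -> "yes" (shorter wins on total)
print(best_option(options, avg_logprob))  # -> "no, but close" (per-token wins)
```

The disagreement between the two scores on the same item is exactly the kind of method-dependence such a systematic comparison has to control for.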
no code implementations • 22 May 2023 • Polina Tsvilodub, Michael Franke
Evaluating grounded neural language model performance with respect to pragmatic qualities like the trade-off between truthfulness, contrastivity, and overinformativity of generated utterances remains a challenge in the absence of data collected from humans.
no code implementations • 11 May 2023 • Polina Tsvilodub, Michael Franke, Robert D. Hawkins, Noah D. Goodman
When faced with a polar question, speakers often provide overinformative answers going beyond a simple "yes" or "no".
no code implementations • 20 May 2021 • Gregory Scontras, Michael Henry Tessler, Michael Franke
Recent advances in computational cognitive science (i.e., simulation-based probabilistic programs) have paved the way for significant progress in formal, implementable models of pragmatics.
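A canonical instance of such a simulation-based probabilistic program is the Rational Speech Act (RSA) model. The sketch below is a standard textbook scalar-implicature example (a literal listener, a soft-max speaker, and a pragmatic listener), not code from this work; utterances, states, and the uniform prior are assumptions for illustration.

```python
import math

# Minimal Rational Speech Act (RSA) sketch for scalar implicature.
utterances = ["some", "all"]
states = ["some-not-all", "all"]

# Literal semantics: "some" is true of both states, "all" only of "all"
meaning = {
    ("some", "some-not-all"): 1.0, ("some", "all"): 1.0,
    ("all", "some-not-all"): 0.0, ("all", "all"): 1.0,
}

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

def literal_listener(u):
    # P_L0(s | u) proportional to [[u]](s) * prior(s), with a uniform prior
    return normalize({s: meaning[(u, s)] for s in states})

def speaker(s, alpha=1.0):
    # P_S1(u | s) proportional to exp(alpha * log P_L0(s | u))
    scores = {}
    for u in utterances:
        p = literal_listener(u)[s]
        scores[u] = math.exp(alpha * math.log(p)) if p > 0 else 0.0
    return normalize(scores)

def pragmatic_listener(u):
    # P_L1(s | u) proportional to P_S1(u | s) * prior(s)
    return normalize({s: speaker(s)[u] for s in states})

print(pragmatic_listener("some"))
# -> {'some-not-all': 0.75, 'all': 0.25}: "some" is strengthened
#    toward "some but not all", the classic scalar implicature.
```

Hearing "some", the pragmatic listener reasons that a speaker in the "all" state would likely have said "all", so probability shifts to the "some but not all" state even though "some" is literally true of both.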
no code implementations • 12 May 2021 • Britta Grusdt, Daniel Lassiter, Michael Franke
While a large body of work has scrutinized the meaning of conditional sentences, considerably less attention has been paid to formal models of their pragmatic use and interpretation.
1 code implementation • 12 Apr 2021 • Robert D. Hawkins, Michael Franke, Michael C. Frank, Adele E. Goldberg, Kenny Smith, Thomas L. Griffiths, Noah D. Goodman
Languages are powerful solutions to coordination problems: they provide stable, shared expectations about how the words we say correspond to the beliefs and intentions in our heads.