1 code implementation • EMNLP 2021 • Mario Giulianelli, Arabella Sinclair, Raquel Fernández
The Uniform Information Density principle states that speakers plan their utterances to reduce fluctuations in the density of the information transmitted.

no code implementations • *SEM (NAACL) 2022 • Samuel Ryb, Mario Giulianelli, Arabella Sinclair, Raquel Fernández
We investigate the extent to which pre-trained language models acquire analytical and deductive logical reasoning capabilities as a side effect of learning word prediction.

no code implementations • 21 Nov 2023 • Aron Molnar, Jaap Jumelet, Mario Giulianelli, Arabella Sinclair
Language models are often used as the backbone of modern dialogue systems.

1 code implementation • 15 Oct 2022 • Mario Giulianelli, Arabella Sinclair, Raquel Fernández
We hypothesise that speakers use construction repetition to mitigate information rate, leading to an overall decrease in utterance information content over the course of a dialogue.

no code implementations • 6 Oct 2022 • Dieuwke Hupkes, Mario Giulianelli, Verna Dankers, Mikel Artetxe, Yanai Elazar, Tiago Pimentel, Christos Christodoulopoulos, Karim Lasri, Naomi Saphra, Arabella Sinclair, Dennis Ulmer, Florian Schottmann, Khuyagbaatar Batsuren, Kaiser Sun, Koustuv Sinha, Leila Khalatbari, Maria Ryskina, Rita Frieske, Ryan Cotterell, Zhijing Jin
We present a taxonomy for characterising and understanding generalisation research in NLP.

1 code implementation • 30 Sep 2021 • Arabella Sinclair, Jaap Jumelet, Willem Zuidema, Raquel Fernández
We investigate the extent to which modern, neural language models are susceptible to structural priming, the phenomenon whereby the structure of a sentence makes the same structure more probable in a follow-up sentence.

no code implementations • EMNLP 2020 • Ece Takmaz, Mario Giulianelli, Sandro Pezzelle, Arabella Sinclair, Raquel Fernández
We propose a generation model that produces referring utterances grounded in both the visual and the conversational context.

no code implementations • WS 2018 • Arabella Sinclair, Adam Lopez, C. G. Lucas, Dragan Gasevic
We find that lexical priming in learner-tutor dialogues differs from that in conversational and task-based dialogues, and we find evidence that alignment increases with ability and with word complexity.