no code implementations • 6 Apr 2024 • Kevin Du, Vésteinn Snæbjarnarson, Niklas Stoehr, Jennifer C. White, Aaron Schein, Ryan Cotterell
To answer a question, language models often need to integrate prior knowledge learned during pretraining and new information presented in context.
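A minimal sketch of how this context/prior tension can be probed, assuming GPT-2 as a stand-in model; the question, context string, and scoring scheme are illustrative, not the paper's method:

```python
# Compare the log-probability a model assigns to an answer with and
# without supporting context in the prompt. GPT-2 and the example
# question are illustrative assumptions, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def answer_logprob(prompt: str, answer: str) -> float:
    """Sum of log-probabilities the model assigns to `answer` after `prompt`."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    answer_ids = tok(answer, return_tensors="pt").input_ids
    ids = torch.cat([prompt_ids, answer_ids], dim=1)
    with torch.no_grad():
        logprobs = model(ids).logits.log_softmax(-1)
    total = 0.0
    for i, a in enumerate(answer_ids[0]):
        # The logit at position p predicts the token at position p + 1.
        pos = prompt_ids.shape[1] + i - 1
        total += logprobs[0, pos, a].item()
    return total

question = "Q: What is the capital of Australia? A:"
context = "Context: The capital of Australia is Canberra. "
print("no context:  ", answer_logprob(question, " Canberra"))
print("with context:", answer_logprob(context + question, " Canberra"))
```

A gap between the two scores gives a rough picture of how much the in-context information, rather than pretraining knowledge alone, drives the answer.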
1 code implementation • 23 Nov 2022 • Jennifer C. White, Ryan Cotterell
Recent work has shown that despite their impressive capabilities, text-to-image diffusion models such as DALL-E 2 (Ramesh et al., 2022) can display strange behaviours when a prompt contains a word with multiple possible meanings, often generating images containing both senses of the word (Rassin et al., 2022).
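The behaviour is easy to reproduce with an open checkpoint. The sketch below assumes Stable Diffusion v1.5 via the `diffusers` library as a stand-in for DALL-E 2 (which is not publicly runnable here), with an illustrative polysemous prompt:

```python
# Generate several samples from an ambiguous prompt and inspect whether
# both word senses appear. Model checkpoint and prompt are assumptions;
# requires a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# "bat" is polysemous (the animal vs. a baseball bat); sampling a few
# seeds lets us check whether both senses surface, sometimes in one image.
for seed in range(4):
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe("a photo of a bat", generator=generator).images[0]
    image.save(f"bat_{seed}.png")
```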
1 code implementation • COLING 2022 • Jennifer C. White, Ryan Cotterell
The ability to generalize compositionally is key to understanding the potentially infinite number of sentences that can be constructed in a human language from only a finite number of words.
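A toy illustration of the kind of train/test split used to measure this, in the style of SCAN (Lake & Baroni, 2018); the vocabulary and held-out combination are assumptions for illustration:

```python
# Build a tiny compositional command -> action dataset and hold out one
# novel verb-modifier combination at test time.
from itertools import product

verbs = {"jump": "JUMP", "walk": "WALK", "run": "RUN"}
mods = {"twice": 2, "thrice": 3}

data = [(f"{v} {m}", " ".join([act] * k))
        for (v, act), (m, k) in product(verbs.items(), mods.items())]

# A model that generalizes compositionally should still map the unseen
# combination "jump thrice" -> "JUMP JUMP JUMP".
test = [(x, y) for x, y in data if x == "jump thrice"]
train = [(x, y) for x, y in data if x != "jump thrice"]
print("train:", train)
print("test: ", test)
```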
1 code implementation • ACL 2021 • Jennifer C. White, Ryan Cotterell
Since language models are used to model a wide variety of languages, it is natural to ask whether the neural architectures used for the task have inductive biases towards modeling particular types of languages.
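One way to operationalize the question, sketched here with an assumed toy grammar and vocabulary: generate parallel artificial corpora that differ only in constituent order, train the same architecture on each, and compare held-out perplexity:

```python
# Generate parallel artificial languages (SVO / SOV / VSO) that share
# vocabulary and semantics but differ in word order. Grammar and lexicon
# are illustrative assumptions, not the paper's languages.
import random

random.seed(0)
subjects = ["dog", "cat", "bird"]
verbs = ["sees", "chases", "likes"]
objects_ = ["fish", "mouse", "worm"]

orders = {
    "SVO": lambda s, v, o: [s, v, o],
    "SOV": lambda s, v, o: [s, o, v],
    "VSO": lambda s, v, o: [v, s, o],
}

corpora = {name: [] for name in orders}
for _ in range(1000):
    s, v, o = (random.choice(subjects), random.choice(verbs),
               random.choice(objects_))
    for name, order in orders.items():
        corpora[name].append(" ".join(order(s, v, o)))

# Training a fixed architecture on each corpus and comparing held-out
# perplexity would expose any order-dependent inductive bias.
print(corpora["SOV"][:3])
```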
no code implementations • NAACL 2021 • Jennifer C. White, Tiago Pimentel, Naomi Saphra, Ryan Cotterell
Probes are models devised to investigate the encoding of knowledge -- e.g., syntactic structure -- in contextual representations.
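For orientation, a minimal sketch of the linear structural probe (Hewitt & Manning, 2019) that this line of work builds on: learn a map B such that squared distances ||B(h_i - h_j)||^2 between contextual vectors approximate syntactic tree distances. The non-linear variant replaces the linear map; the shapes and data below are placeholders:

```python
# Train a linear structural probe on placeholder data: contextual
# vectors h (e.g. from BERT) and gold syntactic tree distances.
import torch

dim, rank, n_tokens = 768, 64, 10
B = torch.nn.Parameter(torch.randn(rank, dim) * 0.01)
opt = torch.optim.Adam([B], lr=1e-3)

h = torch.randn(n_tokens, dim)                       # stand-in representations
tree_dist = torch.randint(1, 6, (n_tokens, n_tokens)).float()
tree_dist = (tree_dist + tree_dist.T) / 2            # symmetrize
tree_dist.fill_diagonal_(0)

for step in range(100):
    proj = h @ B.T                                   # (n_tokens, rank)
    diffs = proj.unsqueeze(1) - proj.unsqueeze(0)    # pairwise differences
    pred = (diffs ** 2).sum(-1)                      # predicted squared distances
    loss = (pred - tree_dist).abs().mean()           # L1 loss, as in the probe
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final loss:", loss.item())
```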