1 code implementation • 30 Apr 2024 • James A. Michaelov, Catherine Arnett, Benjamin K. Bergen
Transformers have generally supplanted recurrent neural networks as the dominant architecture, both for natural language processing tasks and for modelling the effect of predictability on online human language comprehension.
no code implementations • 15 Nov 2023 • James A. Michaelov, Catherine Arnett, Tyler A. Chang, Benjamin K. Bergen
We measure crosslingual structural priming in large language models, comparing model behavior to human experimental results from eight crosslingual experiments covering six languages, and four monolingual structural priming experiments in three non-English languages.
no code implementations • 11 Oct 2023 • Catherine Arnett, Tyler A. Chang, James A. Michaelov, Benjamin K. Bergen
Do multilingual language models share abstract grammatical representations across languages, and if so, when do these develop?
no code implementations • 24 May 2023 • James A. Michaelov, Benjamin K. Bergen
Does inverse scaling only occur as a function of model parameter size, or can it also occur over the course of training?
no code implementations • 20 Jan 2023 • James A. Michaelov, Seana Coulson, Benjamin K. Bergen
Context changes expectations about upcoming words: following a story involving an anthropomorphic peanut, comprehenders expect the sentence "the peanut was in love" more than "the peanut was salted", as indexed by N400 amplitude (Nieuwland & van Berkum, 2006).
no code implementations • 16 Dec 2022 • James A. Michaelov, Benjamin K. Bergen
How well do language models deal with quantification?
1 code implementation • 9 Nov 2022 • James A. Michaelov, Benjamin K. Bergen
Are the predictions of humans and language models affected by similar things?
1 code implementation • COLING 2022 • James A. Michaelov, Benjamin K. Bergen
Some languages allow arguments to be omitted in certain contexts.
no code implementations • 2 Sep 2021 • James A. Michaelov, Seana Coulson, Benjamin K. Bergen
In this study, we investigate whether the linguistic predictions of computational language models or humans better reflect the way in which natural language stimuli modulate the amplitude of the N400.
no code implementations • 20 Jul 2021 • James A. Michaelov, Megan D. Bardolph, Seana Coulson, Benjamin K. Bergen
Despite being designed for performance rather than cognitive plausibility, transformer language models have been found to be better at predicting metrics used to assess human language comprehension than language models with other architectures, such as recurrent neural networks.
1 code implementation • 9 Oct 2020 • James A. Michaelov, Benjamin K. Bergen
We investigate the extent to which word surprisal can be used to predict a neural measure of human language processing difficulty: the N400.
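Surprisal, the quantity used as a predictor here, is simply the negative log probability a language model assigns to a word given its context. A minimal sketch of the calculation, using made-up probabilities for illustration (the values below are hypothetical, not from the paper):

```python
import math

def surprisal(prob: float) -> float:
    """Surprisal in bits: -log2 of the probability a language model
    assigns to a word given its preceding context."""
    return -math.log2(prob)

# Hypothetical probabilities a context-sensitive model might assign
# to continuations of "the peanut was ..." after the anthropomorphic
# peanut story (illustrative values only):
p_in_love = 0.20  # contextually expected continuation
p_salted = 0.02   # contextually unexpected continuation

print(surprisal(p_in_love))  # ~2.32 bits: low surprisal, smaller N400 predicted
print(surprisal(p_salted))   # ~5.64 bits: high surprisal, larger N400 predicted
```

The logic of studies like this one is that higher-surprisal words should elicit larger N400 amplitudes, so a model whose surprisal estimates correlate more strongly with measured N400s is a better account of human predictive processing.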