Search Results for author: James A. Michaelov

Found 10 papers, 3 papers with code

Structural Priming Demonstrates Abstract Grammatical Representations in Multilingual Language Models

no code implementations • 15 Nov 2023 • James A. Michaelov, Catherine Arnett, Tyler A. Chang, Benjamin K. Bergen

We measure crosslingual structural priming in large language models, comparing model behavior to human experimental results from eight crosslingual experiments covering six languages, and four monolingual structural priming experiments in three non-English languages.

Sentence
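As a rough illustration of the kind of measurement this line of work relies on, the sketch below scores a target sentence under a causal language model after primes with different structures; a priming effect would show up as a higher probability for the target whose structure matches the prime. The model (gpt2), the scoring helper, and the dative-alternation sentences are assumptions for illustration, not the paper's multilingual models or stimuli.

```python
# Minimal sketch: structural priming as a difference in target-sentence
# probability after structurally matching vs. mismatching primes.
# gpt2 and the example sentences are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_logprob(context: str, target: str) -> float:
    """Sum of log-probabilities of the target tokens given the context."""
    context_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + " " + target, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(model(full_ids).logits, dim=-1)
    total = 0.0
    # Score only the target tokens; each is predicted from the previous position.
    for pos in range(context_ids.shape[1], full_ids.shape[1]):
        total += log_probs[0, pos - 1, full_ids[0, pos]].item()
    return total

# Dative alternation: does a double-object (DO) prime raise the probability
# of a DO target relative to a prepositional-object (PO) prime?
do_prime = "The teacher gave the student a book."
po_prime = "The teacher gave a book to the student."
do_target = "The chef handed the waiter a plate."

print("log P(DO target | DO prime):", sentence_logprob(do_prime, do_target))
print("log P(DO target | PO prime):", sentence_logprob(po_prime, do_target))
```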

Crosslingual Structural Priming and the Pre-Training Dynamics of Bilingual Language Models

no code implementations • 11 Oct 2023 • Catherine Arnett, Tyler A. Chang, James A. Michaelov, Benjamin K. Bergen

Do multilingual language models share abstract grammatical representations across languages, and if so, when do these develop?

Language Modelling

Emergent inabilities? Inverse scaling over the course of pretraining

no code implementations • 24 May 2023 • James A. Michaelov, Benjamin K. Bergen

Does inverse scaling only occur as a function of model parameter size, or can it also occur over the course of training?

Language Modelling • Math
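One way to look for inverse scaling over training is to hold the evaluation fixed and score it at successive pretraining checkpoints; accuracy that falls as training steps increase would be the signature. The sketch below assumes publicly released Pythia checkpoints (selected via the revision argument) and two toy two-choice items scored by continuation log-probability; neither the checkpoints listed nor the items are taken from the paper.

```python
# Minimal sketch: evaluate the same zero-shot two-choice task at several
# pretraining checkpoints and watch how accuracy changes with training steps.
# The Pythia model, the revision branches, and the toy items are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "EleutherAI/pythia-70m"
CHECKPOINTS = ["step1000", "step16000", "step143000"]  # assumed revision branches

# Toy items: (prompt, correct continuation, distractor continuation)
ITEMS = [
    ("The capital of France is", " Paris", " London"),
    ("Two plus two equals", " four", " five"),
]

def continuation_logprob(model, tokenizer, prompt, continuation):
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(model(full_ids).logits, dim=-1)
    return sum(
        log_probs[0, pos - 1, full_ids[0, pos]].item()
        for pos in range(prompt_ids.shape[1], full_ids.shape[1])
    )

for step in CHECKPOINTS:
    tokenizer = AutoTokenizer.from_pretrained(MODEL, revision=step)
    model = AutoModelForCausalLM.from_pretrained(MODEL, revision=step)
    model.eval()
    correct = sum(
        continuation_logprob(model, tokenizer, prompt, good)
        > continuation_logprob(model, tokenizer, prompt, bad)
        for prompt, good, bad in ITEMS
    )
    print(f"{step}: accuracy {correct}/{len(ITEMS)}")
```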

Can Peanuts Fall in Love with Distributional Semantics?

no code implementations • 20 Jan 2023 • James A. Michaelov, Seana Coulson, Benjamin K. Bergen

Context changes expectations about upcoming words: following a story involving an anthropomorphic peanut, comprehenders expect the sentence "the peanut was in love" more than "the peanut was salted", as indexed by N400 amplitude (Nieuwland & van Berkum, 2006).

Sentence
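The contrast described in this abstract can be approximated with a language model by comparing the probability it assigns to the two continuations after a peanut-as-agent context, as sketched below. The context sentence is a paraphrase for illustration and gpt2 stands in for the models studied; neither reproduces the original Nieuwland & van Berkum materials.

```python
# Minimal sketch: after an anthropomorphic-peanut context, does a causal LM
# assign more probability to "in love" than to "salted"? Context and model
# are illustrative assumptions, not the original experimental stimuli.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

context = (
    "A peanut was dancing and singing about a girl he had just met. "
    "The peanut was"
)

def continuation_logprob(continuation: str) -> float:
    prefix_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(model(full_ids).logits, dim=-1)
    return sum(
        log_probs[0, pos - 1, full_ids[0, pos]].item()
        for pos in range(prefix_ids.shape[1], full_ids.shape[1])
    )

print("log P(' in love' | context):", continuation_logprob(" in love"))
print("log P(' salted' | context): ", continuation_logprob(" salted"))
```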

Collateral facilitation in humans and language models

1 code implementation • 9 Nov 2022 • James A. Michaelov, Benjamin K. Bergen

Are the predictions of humans and language models affected by similar things?

XLM-R

So Cloze yet so Far: N400 Amplitude is Better Predicted by Distributional Information than Human Predictability Judgements

no code implementations • 2 Sep 2021 • James A. Michaelov, Seana Coulson, Benjamin K. Bergen

In this study, we investigate whether the linguistic predictions of computational language models or humans better reflect the way in which natural language stimuli modulate the amplitude of the N400.

Different kinds of cognitive plausibility: why are transformers better than RNNs at predicting N400 amplitude?

no code implementations • 20 Jul 2021 • James A. Michaelov, Megan D. Bardolph, Seana Coulson, Benjamin K. Bergen

Despite being designed for performance rather than cognitive plausibility, transformer language models have been found to be better at predicting metrics used to assess human language comprehension than language models with other architectures, such as recurrent neural networks.

How well does surprisal explain N400 amplitude under different experimental conditions?

1 code implementation • 9 Oct 2020 • James A. Michaelov, Benjamin K. Bergen

We investigate the extent to which word surprisal can be used to predict a neural measure of human language processing difficulty, the N400.
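For reference, surprisal here is the standard information-theoretic quantity computed from a language model's conditional probabilities: the less predictable a word is given its context, the higher its surprisal. A minimal statement of the definition (notation assumed, not taken from the paper):

```latex
% Surprisal of word w_t in its sentence context; P is the language model's
% conditional probability of w_t given the preceding words.
S(w_t) = -\log_2 P\left(w_t \mid w_1, \dots, w_{t-1}\right)
```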
