Search Results for author: Christophe Pallier

Found 8 papers, 1 paper with code

Probing Brain Context-Sensitivity with Masked-Attention Generation

no code implementations • 23 May 2023 • Alexandre Pasquiou, Yair Lakretz, Bertrand Thirion, Christophe Pallier

Two fundamental questions in neurolinguistics concern the brain regions that integrate information beyond the lexical level, and the size of their window of integration.

Word Embeddings
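The window-of-integration question lends itself to a simple computational probe: compare how well brain activity is fit by language-model activations computed with short vs. long context windows. Below is a minimal sketch of that idea, not the authors' code; it assumes the public HuggingFace "gpt2" checkpoint, uses illustrative window sizes, and restricts context by truncating the input rather than by masking attention weights directly.

```python
# Minimal sketch (not the authors' code): probe context-sensitivity by
# limiting a language model's usable context to the last k tokens.
# Assumes the HuggingFace "gpt2" checkpoint; k values are illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def windowed_activations(text, k):
    """Last-layer state of each token when the model sees at most k tokens."""
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    states = []
    for t in range(1, len(ids) + 1):
        ctx = ids[max(0, t - k):t].unsqueeze(0)  # truncate context to k tokens
        with torch.no_grad():
            out = model(ctx, output_hidden_states=True)
        states.append(out.hidden_states[-1][0, -1])  # last layer, last position
    return torch.stack(states)

short = windowed_activations("The book that the critics praised sold well.", k=3)
full = windowed_activations("The book that the critics praised sold well.", k=512)
# Regions whose fMRI signal is better predicted by `full` than by `short`
# integrate information over a window larger than three tokens.
```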

Information-Restricted Neural Language Models Reveal Different Brain Regions' Sensitivity to Semantics, Syntax and Context

1 code implementation • 28 Feb 2023 • Alexandre Pasquiou, Yair Lakretz, Bertrand Thirion, Christophe Pallier

A fundamental question in neurolinguistics concerns the brain regions involved in syntactic and semantic processing during speech comprehension, both at the lexical (word processing) and supra-lexical levels (sentence and discourse processing).

Language Modelling • Sentence
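One way to make the syntax/semantics contrast concrete is to train separate language models on information-restricted views of the same corpus. The sketch below is my reconstruction of that idea, not the paper's pipeline; it assumes spaCy with the en_core_web_sm model installed and uses coarse part-of-speech tags as the syntax-only view.

```python
# Rough sketch (a reconstruction, not the paper's pipeline): build a
# syntax-only view that keeps part-of-speech structure and a semantics-only
# view that keeps content words. Assumes spaCy with en_core_web_sm installed.
import spacy

nlp = spacy.load("en_core_web_sm")
CONTENT_POS = {"NOUN", "PROPN", "VERB", "ADJ", "ADV"}

def restricted_views(sentence):
    doc = nlp(sentence)
    syntax_only = " ".join(tok.pos_ for tok in doc)  # e.g. "DET ADJ NOUN ..."
    semantics_only = " ".join(tok.text for tok in doc if tok.pos_ in CONTENT_POS)
    return syntax_only, semantics_only

syn, sem = restricted_views("The quick brown fox jumps over the lazy dog.")
print(syn)  # part-of-speech sequence: syntactic information only
print(sem)  # content words: lexical-semantic information only
# Language models trained separately on each view yield features that
# dissociate brain regions' sensitivity to syntax vs. semantics.
```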

Neural Language Models are not Born Equal to Fit Brain Data, but Training Helps

no code implementations • 7 Jul 2022 • Alexandre Pasquiou, Yair Lakretz, John Hale, Bertrand Thirion, Christophe Pallier

Neural Language Models (NLMs) have made tremendous advances in recent years, achieving impressive performance on various linguistic tasks.

Language Modelling
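Fitting brain data with model activations is typically done with a cross-validated ridge-regression encoding model. The sketch below shows that comparison for a trained vs. an untrained (randomly initialized) model; it is an assumption about the standard approach rather than the authors' exact code, and all arrays are random placeholders standing in for real activations and fMRI time courses.

```python
# Placeholder encoding-model sketch: ridge regression from language-model
# activations to a voxel's BOLD signal, for a trained vs. an untrained model.
# All arrays below are random stand-ins for real features and fMRI data.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X_trained = rng.normal(size=(400, 768))    # activations of a trained NLM
X_untrained = rng.normal(size=(400, 768))  # same architecture, random weights
y = rng.normal(size=400)                   # one voxel's BOLD time course

def brain_score(X, y):
    """Cross-validated correlation between predicted and observed signal."""
    ridge = RidgeCV(alphas=np.logspace(-2, 4, 7))
    pred = cross_val_predict(ridge, X, y, cv=5)
    return np.corrcoef(pred, y)[0, 1]

print("trained  :", brain_score(X_trained, y))
print("untrained:", brain_score(X_untrained, y))
# With real data, the gap between the two scores quantifies how much training,
# beyond architecture alone, helps the model fit brain activity.
```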

Toward a realistic model of speech processing in the brain with self-supervised learning

no code implementations • 3 Jun 2022 • Juliette Millet, Charlotte Caucheteux, Pierre Orhan, Yves Boubenec, Alexandre Gramfort, Ewan Dunbar, Christophe Pallier, Jean-Remi King

These elements, resulting from the largest neuroimaging benchmark to date, show how self-supervised learning can account for a rich organization of speech processing in the brain, and thus delineate a path to identifying the laws of language acquisition that shape the human brain.

Language Acquisition • Self-Supervised Learning
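A sketch of the feature-extraction side of such a benchmark, under assumptions: it uses the public facebook/wav2vec2-base checkpoint (which may differ from the models trained in the paper) and a random waveform as placeholder audio; each layer's hidden states would then be regressed onto the brain recordings.

```python
# Sketch under assumptions: extract layer-wise features from a public
# self-supervised speech model (facebook/wav2vec2-base, which may differ
# from the checkpoints used in the paper); the audio here is a placeholder.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base").eval()

waveform = torch.randn(16000 * 5).numpy()  # placeholder: 5 s of 16 kHz audio
inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# One (time steps x dimensions) matrix per layer; regressing each layer onto
# the brain recordings shows which processing stage it best matches.
layer_features = [h[0] for h in out.hidden_states]
print(len(layer_features), layer_features[0].shape)
```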

Variable beam search for generative neural parsing and its relevance for the analysis of neuro-imaging signal

no code implementations • IJCNLP 2019 • Benoit Crabbé, Murielle Fabre, Christophe Pallier

This paper describes a method of variable beam size inference for Recurrent Neural Network Grammars (RNNG) by drawing inspiration from sequential Monte Carlo methods such as particle filtering.
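To illustrate how particle filtering yields a variable beam, here is a toy sketch (my illustration, not the paper's RNNG parser): each hypothesis carries a particle count, particles are resampled in proportion to action probabilities, and hypotheses that receive no particles are dropped, so the effective beam width tracks the model's uncertainty.

```python
# Toy illustration (not the paper's parser): particle filtering over parser
# hypotheses makes the effective beam width vary with model uncertainty.
import random

def particle_beam_step(particles, next_actions, n_particles=10):
    """particles: {hypothesis: particle count}; next_actions(h) -> [(action, prob)]."""
    expanded = {}
    for hyp, count in particles.items():
        for action, prob in next_actions(hyp):
            key = hyp + (action,)
            expanded[key] = expanded.get(key, 0.0) + count * prob
    # Multinomial resampling: particles land on hypotheses in proportion
    # to their weight, so low-probability hypotheses may receive none.
    hyps, weights = zip(*expanded.items())
    counts = [0] * len(hyps)
    total = sum(weights)
    for _ in range(n_particles):
        r, acc = random.uniform(0, total), 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                counts[i] += 1
                break
    # Hypotheses with no particles are dropped: the beam shrinks when the
    # action distribution is peaked and widens when it is flat.
    return {h: c for h, c in zip(hyps, counts) if c > 0}

random.seed(0)
beam = {(): 10}
flat = lambda h: [("shift", 0.5), ("reduce", 0.5)]
peaked = lambda h: [("shift", 0.95), ("reduce", 0.05)]
print(len(particle_beam_step(beam, flat)))    # usually 2 surviving hypotheses
print(len(particle_beam_step(beam, peaked)))  # often collapses to 1
```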

Entropy Reduction correlates with temporal lobe activity

no code implementations • WS 2017 • Matthew Nelson, Stanislas Dehaene, Christophe Pallier, John Hale

Using the Entropy Reduction incremental complexity metric, we relate high gamma power signals from the brains of epileptic patients to incremental stages of syntactic analysis in English and French.
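For concreteness, a small worked example of the Entropy Reduction metric with made-up numbers: the complexity assigned to word t is max(0, H(t-1) - H(t)), where H is the entropy of the distribution over syntactic analyses consistent with the words seen so far.

```python
# Worked sketch of the Entropy Reduction metric (illustrative numbers, not
# the paper's grammar): ER_t = max(0, H(t-1) - H(t)), where H is the entropy
# of the parser's distribution over analyses of the words seen so far.
import math

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Made-up probabilities over candidate parses after each successive word.
steps = [
    [0.25, 0.25, 0.25, 0.25],  # four parses equally likely: H = 2.0 bits
    [0.5, 0.5],                # next word rules out two parses: H = 1.0 bit
    [1.0],                     # next word disambiguates fully: H = 0.0 bits
]

h_prev = entropy(steps[0])
for dist in steps[1:]:
    h = entropy(dist)
    print(f"entropy reduction: {max(0.0, h_prev - h):.2f} bits")
    h_prev = h
# Per-word entropy reductions like these serve as the regressors that are
# correlated with high gamma power in the temporal lobe.
```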
