Search Results for author: Mariya Toneva

Found 16 papers, 6 papers with code

Vision-Language Integration in Multimodal Video Transformers (Partially) Aligns with the Brain

no code implementations • 13 Nov 2023 • Dota Tianai Dong, Mariya Toneva

Using brain recordings of participants watching a popular TV show, we analyze the effects of multi-modal connections and interactions in a pre-trained multi-modal video transformer on the alignment with uni- and multi-modal brain regions.

Speech language models lack important brain-relevant semantics

no code implementations • 8 Nov 2023 • Subba Reddy Oota, Emin Çelik, Fatma Deniz, Mariya Toneva

We investigate this question via a direct approach, in which we eliminate information related to specific low-level stimulus features (textual, speech, and visual) in the language model representations, and observe how this intervention affects the alignment with fMRI brain recordings acquired while participants read versus listened to the same naturalistic stories.
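Alignment with brain recordings in this line of work is typically quantified with a linear encoding model: fit a regression from model representations to fMRI responses, then measure prediction quality on held-out data. The sketch below is a minimal, hypothetical version of that evaluation using closed-form ridge regression and synthetic data; the actual pipeline, features, and hyperparameters in the paper are not reproduced here.

```python
import numpy as np

def brain_alignment(features, voxels, alpha=1.0, train_frac=0.8):
    """Fit a ridge encoding model from features (n_trs x d) to voxel
    responses (n_trs x v); return mean held-out Pearson correlation."""
    n = features.shape[0]
    split = int(n * train_frac)
    Xtr, Xte = features[:split], features[split:]
    Ytr, Yte = voxels[:split], voxels[split:]
    # Closed-form ridge: W = (X^T X + alpha I)^-1 X^T Y
    d = Xtr.shape[1]
    W = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(d), Xtr.T @ Ytr)
    pred = Xte @ W
    # Pearson correlation per voxel, then averaged across voxels
    pred_c = pred - pred.mean(axis=0)
    true_c = Yte - Yte.mean(axis=0)
    corr = (pred_c * true_c).sum(axis=0) / (
        np.linalg.norm(pred_c, axis=0) * np.linalg.norm(true_c, axis=0) + 1e-12)
    return corr.mean()

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 16))                        # synthetic model features
Y = X @ rng.standard_normal((16, 50)) + 0.1 * rng.standard_normal((200, 50))
print(round(brain_alignment(X, Y), 3))                    # high: Y is ~linear in X
```

Removing a stimulus feature from the representations (the paper's intervention) and re-running such an evaluation shows how much of the brain alignment that feature accounted for.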

Language Modelling

Perturbed examples reveal invariances shared by language models

no code implementations • 7 Nov 2023 • Ruchit Rawal, Mariya Toneva

Possessing a wide variety of invariances may be a key reason for the recent successes of large language models, and our framework can shed light on the types of invariances that are retained by or emerge in new models.

What Happens During Finetuning of Vision Transformers: An Invariance Based Investigation

no code implementations • 12 Jul 2023 • Gabriele Merlin, Vedant Nanda, Ruchit Rawal, Mariya Toneva

The pretrain-finetune paradigm usually improves downstream performance over training a model from scratch on the same task, and has become commonplace across many areas of machine learning.

Pointwise Representational Similarity

no code implementations • 30 May 2023 • Camila Kolling, Till Speicher, Vedant Nanda, Mariya Toneva, Krishna P. Gummadi

Concretely, we show how PNKA can be leveraged to develop a deeper understanding of (a) the input examples that are likely to be misclassified, (b) the concepts encoded by (individual) neurons in a layer, and (c) the effects of fairness interventions on learned representations.
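Unlike aggregate similarity measures that return one scalar per pair of layers, a pointwise measure assigns a score to each input example. The sketch below is a rough PNKA-style illustration, assuming the score for example i is the cosine similarity between row i of the two centered Gram matrices; the exact normalization used in the paper may differ.

```python
import numpy as np

def pointwise_similarity(X, Y):
    """Per-example similarity between two representations X, Y of
    shape (n_examples, dim). Score for example i = cosine similarity
    between row i of the two centered Gram matrices (a sketch, not
    the paper's exact definition)."""
    def centered_gram(Z):
        Z = Z - Z.mean(axis=0)
        return Z @ Z.T
    K, L = centered_gram(X), centered_gram(Y)
    num = (K * L).sum(axis=1)
    den = np.linalg.norm(K, axis=1) * np.linalg.norm(L, axis=1) + 1e-12
    return num / den

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))
scores_same = pointwise_similarity(X, X)                   # identical reps -> all ~1
scores_diff = pointwise_similarity(X, rng.standard_normal((50, 8)))
print(scores_same.min(), scores_diff.mean())
```

Examples with unusually low scores are exactly the kind the abstract flags as likely to be misclassified or affected by an intervention.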


Training language models to summarize narratives improves brain alignment

2 code implementations • 21 Dec 2022 • Khai Loong Aw, Mariya Toneva

We show that training language models for deeper narrative understanding results in richer representations that have improved alignment to human brain activity.

Language Modelling • Open-Ended Question Answering

Language models and brain alignment: beyond word-level semantics and prediction

no code implementations • 1 Dec 2022 • Gabriele Merlin, Mariya Toneva

The first perturbation is to improve the model's ability to predict the next word in the specific naturalistic stimulus text that the brain recordings correspond to.

Language Modelling

Same Cause; Different Effects in the Brain

1 code implementation • 21 Feb 2022 • Mariya Toneva, Jennifer Williams, Anand Bollu, Christoph Dann, Leila Wehbe

It is then natural to ask: "Is the activity in these different brain zones caused by the stimulus properties in the same way?"

A roadmap to reverse engineering real-world generalization by combining naturalistic paradigms, deep sampling, and predictive computational models

no code implementations • 23 Aug 2021 • Peer Herholz, Eddy Fortier, Mariya Toneva, Nicolas Farrugia, Leila Wehbe, Valentina Borghesani

Real-world generalization, e.g., deciding to approach a never-seen-before animal, relies on contextual information as well as previous experiences.

Does injecting linguistic structure into language models lead to better alignment with brain recordings?

no code implementations • 29 Jan 2021 • Mostafa Abdou, Ana Valeria Gonzalez, Mariya Toneva, Daniel Hershcovich, Anders Søgaard

We evaluate across two fMRI datasets whether language models align better with brain recordings if their attention is biased by annotations from syntactic or semantic formalisms.

Modeling Task Effects on Meaning Representation in the Brain via Zero-Shot MEG Prediction

1 code implementation • NeurIPS 2020 • Mariya Toneva, Otilia Stretcu, Barnabas Poczos, Leila Wehbe, Tom M. Mitchell

These results suggest that only the end of semantic processing of a word is task-dependent, and pose a challenge for future research to formulate new hypotheses for earlier task effects as a function of the task and stimuli.

Inducing brain-relevant bias in natural language processing models

1 code implementation • NeurIPS 2019 • Dan Schwartz, Mariya Toneva, Leila Wehbe

Progress in natural language processing (NLP) models that estimate representations of word sequences has recently been leveraged to improve the understanding of language processing in the brain.

Language Modelling

An Empirical Study of Example Forgetting during Deep Neural Network Learning

3 code implementations • ICLR 2019 • Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, Geoffrey J. Gordon

Inspired by the phenomenon of catastrophic forgetting, we investigate the learning dynamics of neural networks as they train on single classification tasks.
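The central quantity in this paper is a forgetting event: a training example that goes from correctly to incorrectly classified between consecutive evaluations. The bookkeeping can be sketched as follows, assuming per-example correctness is recorded after each epoch (the toy history below is illustrative, not from the paper):

```python
import numpy as np

def count_forgetting_events(correct_history):
    """correct_history: (n_epochs, n_examples) boolean array recording
    whether each example was classified correctly after each epoch.
    A forgetting event for example i is a True -> False transition
    between consecutive epochs."""
    h = np.asarray(correct_history, dtype=bool)
    forgets = h[:-1] & ~h[1:]            # learned at epoch t, forgotten at t+1
    return forgets.sum(axis=0)           # forgetting events per example

# Toy correctness history for 3 examples over 5 epochs
history = [
    [True,  False, True],   # epoch 1
    [False, False, True],   # epoch 2  (example 0 forgotten)
    [True,  True,  True],   # epoch 3
    [False, True,  True],   # epoch 4  (example 0 forgotten again)
    [True,  True,  True],   # epoch 5
]
print(count_forgetting_events(history).tolist())  # -> [2, 0, 0]
```

Examples with many forgetting events tend to be the hard or atypical ones, while never-forgotten examples can often be pruned from training with little loss in accuracy.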

General Classification
