Search Results for author: Mariya Toneva

Found 7 papers, 5 papers with code

Same Cause; Different Effects in the Brain

1 code implementation • 21 Feb 2022 • Mariya Toneva, Jennifer Williams, Anand Bollu, Christoph Dann, Leila Wehbe

It is then natural to ask: "Is the activity in these different brain zones caused by the stimulus properties in the same way?"

A roadmap to reverse engineering real-world generalization by combining naturalistic paradigms, deep sampling, and predictive computational models

no code implementations • 23 Aug 2021 • Peer Herholz, Eddy Fortier, Mariya Toneva, Nicolas Farrugia, Leila Wehbe, Valentina Borghesani

Real-world generalization, e.g., deciding to approach a never-seen-before animal, relies on contextual information as well as previous experiences.

Does injecting linguistic structure into language models lead to better alignment with brain recordings?

no code implementations • 29 Jan 2021 • Mostafa Abdou, Ana Valeria Gonzalez, Mariya Toneva, Daniel Hershcovich, Anders Søgaard

We evaluate, across two fMRI datasets, whether language models align better with brain recordings if their attention is biased by annotations from syntactic or semantic formalisms.

Natural Language Processing

Modeling Task Effects on Meaning Representation in the Brain via Zero-Shot MEG Prediction

1 code implementation • NeurIPS 2020 • Mariya Toneva, Otilia Stretcu, Barnabas Poczos, Leila Wehbe, Tom M. Mitchell

These results suggest that only the end of semantic processing of a word is task-dependent, and pose a challenge for future research to formulate new hypotheses for earlier task effects as a function of the task and stimuli.
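Zero-shot prediction of brain recordings is commonly done with an encoding model: a linear map is fit from stimulus features to recorded responses on one set of words, then evaluated on held-out words the model never saw during fitting. A minimal synthetic sketch of that general setup (all data, dimensions, and variable names here are illustrative assumptions, not the authors' code):

```python
# Illustrative zero-shot encoding-model sketch: fit a ridge-regression map
# from word features to simulated brain responses, then score predictions
# on held-out "zero-shot" words by per-sensor correlation.
import numpy as np

rng = np.random.default_rng(0)
n_words, n_feats, n_sensors = 40, 5, 8
X = rng.standard_normal((n_words, n_feats))                        # word features
W_true = rng.standard_normal((n_feats, n_sensors))
Y = X @ W_true + 0.1 * rng.standard_normal((n_words, n_sensors))   # synthetic recordings

train, test = np.arange(0, 30), np.arange(30, 40)  # test words unseen during fitting
lam = 1.0                                          # ridge penalty (assumed value)
W = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(n_feats),
                    X[train].T @ Y[train])         # closed-form ridge solution
pred = X[test] @ W

# Score: correlation between predicted and observed responses, per sensor
corrs = [np.corrcoef(pred[:, s], Y[test][:, s])[0, 1] for s in range(n_sensors)]
mean_corr = float(np.mean(corrs))
print(mean_corr > 0.5)
```

With the low-noise synthetic data above, held-out correlations are high; real MEG or fMRI data would of course yield far noisier scores.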

Inducing brain-relevant bias in natural language processing models

1 code implementation • NeurIPS 2019 • Dan Schwartz, Mariya Toneva, Leila Wehbe

Progress in natural language processing (NLP) models that estimate representations of word sequences has recently been leveraged to improve the understanding of language processing in the brain.

Language Modelling, Natural Language Processing

An Empirical Study of Example Forgetting during Deep Neural Network Learning

2 code implementations • ICLR 2019 • Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, Geoffrey J. Gordon

Inspired by the phenomenon of catastrophic forgetting, we investigate the learning dynamics of neural networks as they train on single classification tasks.

Benchmark, General Classification
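The paper's central statistic is the "forgetting event": a training example transitions from correctly to incorrectly classified between consecutive evaluations. A minimal sketch of counting such events, assuming a simple per-example bookkeeping loop (all names and the simulated accuracy trajectories are illustrative, not the authors' released code):

```python
# Hypothetical sketch of counting forgetting events: an example is "forgotten"
# when its prediction flips from correct to incorrect between evaluations.
from collections import defaultdict

def update_forgetting_stats(prev_correct, forget_counts, example_ids, correct_now):
    """Update per-example forgetting counts after one evaluation pass."""
    for ex_id, is_correct in zip(example_ids, correct_now):
        was_correct = prev_correct.get(ex_id, False)
        if was_correct and not is_correct:
            forget_counts[ex_id] += 1  # transition: correct -> incorrect
        prev_correct[ex_id] = is_correct

prev_correct = {}
forget_counts = defaultdict(int)
# Simulated correctness over three evaluation passes for two examples:
update_forgetting_stats(prev_correct, forget_counts, [0, 1], [True, True])
update_forgetting_stats(prev_correct, forget_counts, [0, 1], [False, True])
update_forgetting_stats(prev_correct, forget_counts, [0, 1], [True, True])
print(forget_counts[0], forget_counts[1])  # → 1 0
```

Examples with zero forgetting events ("unforgettable" examples) can, per the paper, be removed in large numbers with little loss in test accuracy.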
