Search Results for author: Richard Antonello

Found 5 papers, 4 papers with code

Scaling laws for language encoding models in fMRI

1 code implementation NeurIPS 2023 Richard Antonello, Aditya Vaidya, Alexander G. Huth

Representations from transformer-based unidirectional language models are known to be effective at predicting brain responses to natural language.

Selecting Informative Contexts Improves Language Model Fine-tuning

no code implementations ACL 2021 Richard Antonello, Nicole Beckage, Javier Turek, Alexander Huth

Here we present a general fine-tuning method that we call information gain filtration for improving the overall training efficiency and final performance of language model fine-tuning.

Language Modelling
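The abstract names "information gain filtration" only at a high level: score candidate fine-tuning contexts by how much they are expected to help, and train only on the informative ones. As a loose illustration of that general idea (not the authors' implementation — the model, scoring rule, and threshold below are all toy assumptions), one can estimate a batch's information gain as the reduction in held-out loss after a single trial gradient step on that batch, then fine-tune only on batches scoring above a cutoff:

```python
import numpy as np

# Toy stand-in for a language model: linear regression, since a GPT-style
# setup is too heavy for a sketch. All names and hyperparameters here are
# illustrative assumptions, not the paper's actual method.
rng = np.random.default_rng(0)

def loss(w, X, y):
    """Mean squared error of the linear model (proxy for LM loss)."""
    return float(np.mean((X @ w - y) ** 2))

def information_gain(w, Xb, yb, Xval, yval, lr=0.1):
    """Estimated gain of one batch: drop in held-out loss after a
    single trial gradient step on that batch alone."""
    grad = 2 * Xb.T @ (Xb @ w - yb) / len(yb)
    w_trial = w - lr * grad
    return loss(w, Xval, yval) - loss(w_trial, Xval, yval)

# Synthetic data: half the candidate batches are clean, half very noisy.
d, n_batches, bs = 5, 20, 8
w_true = rng.normal(size=d)
Xval = rng.normal(size=(32, d))
yval = Xval @ w_true + 0.1 * rng.normal(size=32)
batches = []
for i in range(n_batches):
    Xb = rng.normal(size=(bs, d))
    noise = 0.1 if i % 2 == 0 else 5.0  # odd batches are uninformative
    yb = Xb @ w_true + noise * rng.normal(size=bs)
    batches.append((Xb, yb))

# Filtration step: keep only batches whose estimated gain beats the median.
w = np.zeros(d)
gains = [information_gain(w, Xb, yb, Xval, yval) for Xb, yb in batches]
kept = [b for b, g in zip(batches, gains) if g > np.median(gains)]

# Fine-tune on the filtered subset only.
for Xb, yb in kept:
    w -= 0.1 * (2 * Xb.T @ (Xb @ w - yb) / len(yb))

print(f"kept {len(kept)}/{n_batches} batches; "
      f"val loss {loss(np.zeros(d), Xval, yval):.3f} -> {loss(w, Xval, yval):.3f}")
```

The filtration happens once up front here; in practice a scoring model could be trained to predict gains cheaply, but that refinement is beyond this sketch.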

Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses

1 code implementation NeurIPS 2021 Richard Antonello, Javier Turek, Vy Vo, Alexander Huth

We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI.

Transfer Learning, Translation +1

Selecting Informative Contexts Improves Language Model Finetuning

1 code implementation 1 May 2020 Richard Antonello, Nicole Beckage, Javier Turek, Alexander Huth

Here we present a general fine-tuning method that we call information gain filtration for improving the overall training efficiency and final performance of language model fine-tuning.

Language Modelling
