Search Results for author: Alexander Huth

Found 7 papers, 3 papers with code

Efficient, sparse representation of manifold distance matrices for classical scaling

1 code implementation CVPR 2018 Javier S. Turek, Alexander Huth

For large point sets it is common to use a low-rank approximation to the distance matrix, which fits in memory and can be efficiently analyzed using methods such as multidimensional scaling (MDS).
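
The low-rank idea is easy to sketch: classical scaling double-centers the squared distance matrix and eigendecomposes it, and a landmark (Nystrom-style) variant stores only distances to m << n landmark points. A minimal NumPy sketch; the function names and the landmark construction are illustrative assumptions, not the paper's exact sparse representation:

```python
# Classical scaling (MDS) from a full distance matrix, plus a
# landmark-style low-rank variant. Illustrative sketch only; the paper's
# sparse representation of manifold distances is more sophisticated.
import numpy as np

def classical_mds(D, k=2):
    """Embed n points in k dimensions from an n x n distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                   # eigenvalues, ascending
    idx = np.argsort(w)[::-1][:k]              # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

def landmark_mds(D_ll, D_la, k=2):
    """Low-rank variant: D_ll is m x m distances among m landmarks,
    D_la is m x n distances from the landmarks to all points."""
    m = D_ll.shape[0]
    J = np.eye(m) - np.ones((m, m)) / m
    B = -0.5 * J @ (D_ll ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]
    pinv = V[:, idx] / np.sqrt(np.maximum(w[idx], 1e-12))
    mean_sq = (D_ll ** 2).mean(axis=0)         # mean squared landmark distances
    # triangulate every point from its distances to the landmarks
    return -0.5 * ((D_la ** 2) - mean_sq[:, None]).T @ pinv
```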

Deep Generative Modeling for Scene Synthesis via Hybrid Representations

no code implementations 6 Aug 2018 Zaiwei Zhang, Zhenpei Yang, Chongyang Ma, Linjie Luo, Alexander Huth, Etienne Vouga, Qi-Xing Huang

We show a principled way to train this model by combining discriminator losses for both a 3D object arrangement representation and a 2D image-based representation.
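
The hybrid training scheme can be sketched as a generator updated against two discriminators at once, one scoring the 3D object arrangement and one scoring a rendered 2D view of the same scene. A hedged PyTorch sketch; the module names, the differentiable `render` step, and the equal loss weighting are assumptions, not the paper's exact setup:

```python
# Generator update against two discriminators: D3d scores the 3D object
# arrangement, D2d scores a rendered 2D image of the same scene.
# Hedged sketch; render() is assumed to be differentiable.
import torch
import torch.nn.functional as F

def generator_step(G, D3d, D2d, render, z, opt_g, w2d=1.0):
    scene = G(z)                                  # sampled 3D arrangement
    image = render(scene)                         # 2D image-based view
    logits_3d, logits_2d = D3d(scene), D2d(image)
    # non-saturating GAN loss from each representation, combined
    loss = (F.binary_cross_entropy_with_logits(
                logits_3d, torch.ones_like(logits_3d))
            + w2d * F.binary_cross_entropy_with_logits(
                logits_2d, torch.ones_like(logits_2d)))
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()
```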

Incorporating Context into Language Encoding Models for fMRI

no code implementations NeurIPS 2018 Shailee Jain, Alexander Huth

By varying the amount of context used in the models and providing the models with distorted context, we show that the improvement from incorporating context is due to a combination of better word embeddings learned by the LSTM language model and contextual information.

Language Modelling, Word Embeddings
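
One way to vary context, sketched below, is to recompute each word's representation using only a fixed number of preceding words; distorted context can then be simulated by shuffling that window. The LM interface here (token ids in, per-position hidden states out) is an assumption:

```python
# Extract a representation for each word given exactly `context_len`
# preceding words. Hedged sketch: `lm` is assumed to map a (1, T) tensor
# of token ids to (1, T, d) hidden states.
import torch

def contextual_features(lm, token_ids, context_len):
    feats = []
    for t in range(len(token_ids)):
        start = max(0, t - context_len)
        window = torch.tensor(token_ids[start:t + 1]).unsqueeze(0)
        with torch.no_grad():
            states = lm(window)            # (1, T, d), assumed interface
        feats.append(states[0, -1])        # hidden state at the target word
    return torch.stack(feats)              # (n_words, d) feature matrix
```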

Selecting Informative Contexts Improves Language Model Finetuning

1 code implementation 1 May 2020 Richard Antonello, Nicole Beckage, Javier Turek, Alexander Huth

Here we present a general fine-tuning method that we call information gain filtration for improving the overall training efficiency and final performance of language model fine-tuning.

Language Modelling
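
The core loop of information gain filtration can be sketched as: score each candidate batch by its predicted effect on held-out loss, and only take gradient steps on batches that clear a threshold. The `score_fn` below is a stand-in (the paper learns a secondary model to predict information gain), and the `model(**batch).loss` interface is an assumption:

```python
# Fine-tuning with batch filtering: skip batches whose predicted
# information gain (expected reduction in held-out loss) is too small.
# Hedged sketch; score_fn stands in for the paper's learned predictor.
def finetune_with_filtration(model, batches, optimizer, score_fn, threshold):
    for batch in batches:
        gain = score_fn(model, batch)     # predicted informativeness
        if gain < threshold:
            continue                      # filter out uninformative contexts
        loss = model(**batch).loss        # causal LM loss (assumed interface)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```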

Interpretable multi-timescale models for predicting fMRI responses to continuous natural speech

no code implementations NeurIPS 2020 Shailee Jain, Vy Vo, Shivangi Mahto, Amanda LeBel, Javier S. Turek, Alexander Huth

To understand how the human brain represents the information in natural speech, one approach is to build encoding models that predict fMRI responses to natural language using representations extracted from neural network language models (LMs).
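
A standard form of such an encoding model is ridge regression from LM-derived features to voxel responses, scored per voxel by held-out correlation. A minimal scikit-learn sketch of that common setup, not the paper's multi-timescale model:

```python
# Linear encoding model: ridge regression from language-model features
# (n_timepoints x n_features) to fMRI responses (n_timepoints x n_voxels).
import numpy as np
from sklearn.linear_model import Ridge

def fit_encoding_model(X_train, Y_train, X_test, Y_test, alpha=1.0):
    model = Ridge(alpha=alpha).fit(X_train, Y_train)
    Y_pred = model.predict(X_test)
    # per-voxel performance as Pearson correlation on held-out data
    r = np.array([np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1]
                  for v in range(Y_test.shape[1])])
    return model, r
```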

Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses

1 code implementation NeurIPS 2021 Richard Antonello, Javier Turek, Vy Vo, Alexander Huth

We find that a low-dimensional embedding of the space of language representations can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI.

Transfer Learning, Translation +1
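
The idea can be sketched by embedding a set of feature spaces according to how well each linearly predicts the others, then checking whether position in that low-dimensional space tracks brain-encoding performance. Everything below (ridge transfer scores, PCA embedding) is an illustrative assumption about the construction:

```python
# Embed a set of feature spaces by pairwise linear predictability,
# then reduce to a low-dimensional "representation embedding".
# Hedged sketch; not the paper's exact construction.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

def representation_embedding(feature_spaces, n_dims=3):
    """feature_spaces: list of (n_samples, d_i) arrays over the same stimuli."""
    k = len(feature_spaces)
    transfer = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            if i != j:
                # in-sample R^2 of predicting space j from space i
                transfer[i, j] = Ridge(alpha=1.0).fit(
                    feature_spaces[i], feature_spaces[j]
                ).score(feature_spaces[i], feature_spaces[j])
    return PCA(n_components=n_dims).fit_transform(transfer)
```

Regressing each feature space's measured brain-prediction performance onto its coordinates in this embedding would then correspond to the kind of relationship the paper reports.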

Selecting Informative Contexts Improves Language Model Fine-tuning

no code implementations ACL 2021 Richard Antonello, Nicole Beckage, Javier Turek, Alexander Huth

Here we present a general fine-tuning method that we call information gain filtration for improving the overall training efficiency and final performance of language model fine-tuning.

Language Modelling
