1 code implementation • CVPR 2018 • Javier S. Turek, Alexander Huth
Thus, for large point sets, it is common to use a low-rank approximation to the distance matrix, which fits in memory and can be efficiently analyzed using methods such as multidimensional scaling (MDS).
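To illustrate the connection between a low-rank distance matrix and MDS (this is a generic classical-MDS sketch, not the paper's algorithm; all data here is synthetic), note that classical MDS double-centers the squared-distance matrix and keeps only the top-k eigenpairs, which is itself a rank-k approximation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))          # synthetic points in 3-D

# Full pairwise squared-distance matrix (feasible at this size; the point
# of a low-rank approach is to avoid materializing this for very large n)
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)

n = sq.shape[0]
J = np.eye(n) - np.ones((n, n)) / n    # centering matrix
B = -0.5 * J @ sq @ J                  # double-centered Gram matrix

# Classical MDS: top-k eigenpairs of B give a rank-k embedding whose
# pairwise distances approximate the originals.
w, V = np.linalg.eigh(B)               # eigenvalues in ascending order
idx = np.argsort(w)[::-1][:3]          # top 3 eigenpairs
coords = V[:, idx] * np.sqrt(np.maximum(w[idx], 0))
```

Because the synthetic points are genuinely 3-dimensional, the rank-3 embedding reproduces the original distances up to numerical error; for real large-scale problems the eigendecomposition would itself be replaced by a scalable low-rank routine.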
no code implementations • 6 Aug 2018 • Zaiwei Zhang, Zhenpei Yang, Chongyang Ma, Linjie Luo, Alexander Huth, Etienne Vouga, Qi-Xing Huang
We show a principled way to train this model by combining discriminator losses for both a 3D object arrangement representation and a 2D image-based representation.
no code implementations • NeurIPS 2018 • Shailee Jain, Alexander Huth
By varying the amount of context used in the models and providing the models with distorted context, we show that this improvement is due to a combination of better word embeddings learned by the LSTM language model and contextual information.
1 code implementation • 1 May 2020 • Richard Antonello, Nicole Beckage, Javier Turek, Alexander Huth
Here we present a general fine-tuning method that we call information gain filtration, which improves both the training efficiency and the final performance of language model fine-tuning.
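The general shape of such a data-filtering approach can be sketched as follows; note that the scoring function below is a random placeholder, not the authors' information-gain estimator, and every name here is a hypothetical stand-in:

```python
import numpy as np

rng = np.random.default_rng(2)
examples = [f"example_{i}" for i in range(1000)]  # synthetic fine-tuning corpus

def estimated_gain(example):
    # Placeholder score: in a real filtration scheme this would estimate
    # how much each example is expected to improve the fine-tuned model.
    return rng.random()

scores = np.array([estimated_gain(e) for e in examples])
threshold = np.quantile(scores, 0.75)
keep = [e for e, s in zip(examples, scores) if s > threshold]
# Fine-tune only on `keep`, the top quartile by estimated gain.
```

The efficiency win comes from spending gradient steps only on examples whose estimated benefit clears the threshold.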
no code implementations • NeurIPS 2020 • Shailee Jain, Vy Vo, Shivangi Mahto, Amanda LeBel, Javier S. Turek, Alexander Huth
To understand how the human brain represents this information, one approach is to build encoding models that predict fMRI responses to natural language using representations extracted from neural network language models (LMs).
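A minimal encoding-model sketch, assuming the common ridge-regression setup (synthetic data throughout; the feature matrix stands in for LM representations of the stimulus, and the response matrix for BOLD time series):

```python
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(1)
n_trs, n_feats, n_voxels = 300, 50, 20
features = rng.normal(size=(n_trs, n_feats))     # LM-derived features per time point
true_w = rng.normal(size=(n_feats, n_voxels))
bold = features @ true_w + 0.1 * rng.normal(size=(n_trs, n_voxels))

# Ridge regression from features to each voxel's response
# (alpha would normally be chosen by cross-validation).
alpha = 1.0
W = solve(features.T @ features + alpha * np.eye(n_feats), features.T @ bold)

pred = features @ W
# Per-voxel prediction performance: correlation of predicted vs. actual
r = [np.corrcoef(pred[:, v], bold[:, v])[0, 1] for v in range(n_voxels)]
```

The per-voxel correlations `r` are the standard yardstick for how well a given feature space predicts brain responses.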
1 code implementation • NeurIPS 2021 • Richard Antonello, Javier Turek, Vy Vo, Alexander Huth
We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI.
no code implementations • ACL 2021 • Richard Antonello, Nicole Beckage, Javier Turek, Alexander Huth
Here we present a general fine-tuning method that we call information gain filtration, which improves both the training efficiency and the final performance of language model fine-tuning.