Search Results for author: Nicole Beckage

Found 5 papers, 1 paper with code

Selecting Informative Contexts Improves Language Model Fine-tuning

no code implementations • ACL 2021 • Richard Antonello, Nicole Beckage, Javier Turek, Alexander Huth

Here we present a general fine-tuning method that we call information gain filtration for improving the overall training efficiency and final performance of language model fine-tuning.

Language Modelling
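
The snippet above names information gain filtration but does not describe the procedure. As a rough illustration of the general idea of selecting informative contexts, the sketch below scores each candidate fine-tuning batch by a simple proxy (the one-step drop in held-out loss after training on that batch) and keeps only the top-scoring batches. The proxy, the helper names (`score_context`, `filter_contexts`), and the PyTorch setup are illustrative assumptions, not the authors' exact method.

```python
# Hedged sketch: filter fine-tuning contexts by an "information gain" proxy.
# The proxy here (one-step reduction in held-out loss) is an assumption for
# illustration only, not the procedure from the paper.
import copy
import torch

def score_context(model, loss_fn, context, target, heldout, lr=1e-3):
    """Return the drop in held-out loss after one SGD step on (context, target)."""
    probe = copy.deepcopy(model)          # throwaway copy so the real model is untouched
    opt = torch.optim.SGD(probe.parameters(), lr=lr)

    x_h, y_h = heldout
    with torch.no_grad():
        before = loss_fn(probe(x_h), y_h).item()

    opt.zero_grad()
    loss_fn(probe(context), target).backward()
    opt.step()

    with torch.no_grad():
        after = loss_fn(probe(x_h), y_h).item()
    return before - after                 # larger = more informative context

def filter_contexts(model, loss_fn, batches, heldout, keep_fraction=0.5):
    """Keep only the highest-scoring fraction of candidate fine-tuning batches."""
    scored = [(score_context(model, loss_fn, x, y, heldout), (x, y)) for x, y in batches]
    scored.sort(key=lambda s: s[0], reverse=True)
    k = max(1, int(len(scored) * keep_fraction))
    return [batch for _, batch in scored[:k]]
```

Fine-tuning would then proceed only on the batches returned by `filter_contexts`, which is the sense in which filtering can reduce training cost while preserving (or improving) final performance.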

Slower is Better: Revisiting the Forgetting Mechanism in LSTM for Slower Information Decay

no code implementations • 12 May 2021 • Hsiang-Yun Sherry Chien, Javier S. Turek, Nicole Beckage, Vy A. Vo, Christopher J. Honey, Ted L. Willke

Altogether, we found that an LSTM with the proposed forget gate can learn long-term dependencies, outperforming other recurrent networks in multiple domains; such a gating mechanism can be integrated into other architectures to improve the learning of long-timescale information in recurrent neural networks.

Image Classification • Language Modelling
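
The abstract mentions a forget gate designed for slower information decay, but not its exact form. As a loose illustration of the general idea, the sketch below simply biases the forget gate of a standard PyTorch LSTM toward values near 1, so the cell state is retained longer across time steps; the helper `bias_forget_gate` and the chosen bias value are assumptions for illustration, not the gating mechanism proposed in the paper.

```python
# Hedged sketch: bias an LSTM's forget gate toward 1 so cell state decays
# more slowly. This is a generic slower-decay trick, not the paper's method.
import torch
import torch.nn as nn

def bias_forget_gate(lstm: nn.LSTM, value: float = 1.0) -> None:
    """Set the forget-gate bias to a positive value (slower decay).

    PyTorch packs LSTM biases as [input, forget, cell, output] gates,
    each chunk of size hidden_size; the forget gate is the second chunk.
    """
    h = lstm.hidden_size
    for name, param in lstm.named_parameters():
        if name.startswith("bias"):
            with torch.no_grad():
                param[h:2 * h].fill_(value)   # forget-gate slice

lstm = nn.LSTM(input_size=32, hidden_size=64, num_layers=1, batch_first=True)
bias_forget_gate(lstm, value=1.0)

x = torch.randn(8, 100, 32)                   # (batch, time, features)
out, (h_n, c_n) = lstm(x)
print(out.shape)                              # torch.Size([8, 100, 64])
```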

Selecting Informative Contexts Improves Language Model Finetuning

1 code implementation • 1 May 2020 • Richard Antonello, Nicole Beckage, Javier Turek, Alexander Huth

Here we present a general fine-tuning method that we call information gain filtration for improving the overall training efficiency and final performance of language model fine-tuning.

Language Modelling
