Search Results for author: Luis F. Lago-Fernández

Found 4 papers, 0 papers with code

The Importance of the Current Input in Sequence Modeling

no code implementations • 22 Dec 2021 • Christian Oliva, Luis F. Lago-Fernández

Experiments carried out on different problems show that adding this kind of direct connection from the current input to a recurrent network always improves the results, regardless of the architecture and training-specific details.
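One way to read the connection described above is a skip connection that lets the network's output see the current input directly, in addition to the recurrent hidden state. The exact form used in the paper is not given in this snippet, so the cell and weight layout below are a hypothetical minimal sketch:

```python
import math

def rnn_step(x, h, params):
    """One step of a minimal Elman-style RNN whose readout also sees
    the current input directly (a skip connection). Hypothetical
    layout, not the paper's exact formulation."""
    Wxh, Whh, Who, Wxo = params
    # hidden state: tanh of input plus recurrent contributions
    h_new = [math.tanh(sum(Wxh[i][j] * x[j] for j in range(len(x)))
                       + sum(Whh[i][k] * h[k] for k in range(len(h))))
             for i in range(len(h))]
    # output: standard readout from the hidden state...
    y = [sum(Who[o][i] * h_new[i] for i in range(len(h_new)))
         # ...plus the extra direct connection from the current input
         + sum(Wxo[o][j] * x[j] for j in range(len(x)))
         for o in range(len(Who))]
    return h_new, y

# tiny 1-D example with fixed, illustrative weights
params = ([[0.5]], [[0.1]], [[1.0]], [[2.0]])
h1, y1 = rnn_step([1.0], [0.0], params)
```

Dropping the `Wxo` term recovers a plain Elman readout, which makes the contribution of the current-input connection easy to isolate in experiments.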

Language Modelling

Backward Gradient Normalization in Deep Neural Networks

no code implementations • 17 Jun 2021 • Alejandro Cabana, Luis F. Lago-Fernández

We introduce a new technique for gradient normalization during neural network training.
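The snippet names the technique but not its details. The general idea of gradient normalization can be sketched as rescaling a gradient to unit L2 norm at some point in the backward pass; where the paper applies it and with what scaling is not specified here, so this is only an illustrative sketch:

```python
import math

def normalize_gradient(grad, eps=1e-12):
    """Rescale a gradient vector to unit L2 norm. A minimal sketch of
    backward-pass gradient normalization; the normalization points and
    scaling used in the paper are not specified in this snippet."""
    norm = math.sqrt(sum(g * g for g in grad))
    # eps guards against division by zero for an all-zero gradient
    return [g / (norm + eps) for g in grad]

g = normalize_gradient([3.0, 4.0])
```

The same rescaling, applied between layers during backpropagation, keeps gradient magnitudes comparable across depth.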

Stability of Internal States in Recurrent Neural Networks Trained on Regular Languages

no code implementations • 18 Jun 2020 • Christian Oliva, Luis F. Lago-Fernández

We provide an empirical study of the stability of recurrent neural networks trained to recognize regular languages.

Separation of Memory and Processing in Dual Recurrent Neural Networks

no code implementations • 17 May 2020 • Christian Oliva, Luis F. Lago-Fernández

The resulting models are simpler and easier to interpret, and they achieve higher accuracy on different sample problems, including the recognition of regular languages, the computation of additions in different bases, and the generation of arithmetic expressions.
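One plausible reading of the separation described above is an architecture with a recurrent part that only maintains memory and a feedforward part that does all the processing of the current input and the memory contents. The weight layout below is hypothetical, not taken from the paper:

```python
import math

def dual_rnn_step(x, m, params):
    """One step of a dual architecture separating memory from
    processing: a recurrent part updates the memory state m, while a
    separate feedforward part computes the output from the current
    input and the memory. Hypothetical layout, for illustration only."""
    Wxm, Wmm, Wxo, Wmo = params
    # memory part: recurrent, sees the input and its own previous state
    m_new = [math.tanh(sum(Wxm[i][j] * x[j] for j in range(len(x)))
                       + sum(Wmm[i][k] * m[k] for k in range(len(m))))
             for i in range(len(m))]
    # processing part: a feedforward readout of input and memory,
    # with no recurrence of its own
    y = [sum(Wxo[o][j] * x[j] for j in range(len(x)))
         + sum(Wmo[o][i] * m_new[i] for i in range(len(m_new)))
         for o in range(len(Wxo))]
    return m_new, y

# tiny 1-D example with fixed, illustrative weights
params = ([[1.0]], [[0.5]], [[0.2]], [[1.0]])
m1, y1 = dual_rnn_step([1.0], [0.0], params)
```

Keeping all processing out of the recurrent part is what makes the memory state easier to inspect and interpret after training.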
