no code implementations • 22 Dec 2021 • Christian Oliva, Luis F. Lago-Fernández
Experiments carried out on a range of problems show that adding this kind of connection to a recurrent network consistently improves results, regardless of the architecture and training-specific details.
no code implementations • 17 Jun 2021 • Alejandro Cabana, Luis F. Lago-Fernández
We introduce a new technique for gradient normalization during neural network training.
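The abstract does not specify how the normalization is performed, so the following is only an illustrative sketch of one common variant: rescaling each parameter's gradient to unit L2 norm before the update, so that the learning rate alone controls the step size. The function name and the per-parameter scheme are assumptions, not the paper's method.

```python
import numpy as np

def normalized_sgd_step(params, grads, lr=0.1, eps=1e-8):
    """One SGD step with per-parameter gradient normalization (illustrative):
    each gradient is divided by its L2 norm, so every update has length lr."""
    updated = []
    for p, g in zip(params, grads):
        g_hat = g / (np.linalg.norm(g) + eps)  # unit-norm gradient direction
        updated.append(p - lr * g_hat)
    return updated
```

With this scheme the magnitude of each update is fixed at roughly `lr`, which decouples step size from raw gradient scale; only the gradient's direction is used.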
no code implementations • 18 Jun 2020 • Christian Oliva, Luis F. Lago-Fernández
We provide an empirical study of the stability of recurrent neural networks trained to recognize regular languages.
no code implementations • 17 May 2020 • Christian Oliva, Luis F. Lago-Fernández
The resulting models are simpler, easier to interpret, and achieve higher accuracy on several sample problems, including the recognition of regular languages, the computation of additions in different bases, and the generation of arithmetic expressions.
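To make the regular-language recognition task concrete, here is a toy dataset generator in the usual style for this kind of benchmark: random strings over an alphabet, labeled by membership in a regular language. The specific language `(ab)+` and all names here are illustrative choices, not taken from the paper.

```python
import re
import random

def make_dataset(n=8, pattern=r"(ab)+", alphabet="ab", seed=0):
    """Toy regular-language recognition data: random strings over the
    alphabet, labeled 1 if the whole string matches the pattern, else 0.
    The language (ab)+ is an illustrative choice, not the paper's benchmark."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        s = "".join(rng.choice(alphabet) for _ in range(rng.randint(1, 6)))
        data.append((s, int(bool(re.fullmatch(pattern, s)))))
    return data
```

A recurrent network trained on pairs like these must learn, from positive and negative examples alone, a state machine equivalent to the target automaton.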