Search Results for author: Luis A. Aguirre

Found 6 papers, 5 papers with code

Functional observability and target state estimation in large-scale networks

1 code implementation • 18 Jan 2022 • Arthur N. Montanari, Chao Duan, Luis A. Aguirre, Adilson E. Motter

The quantitative understanding and precise control of complex dynamical systems can only be achieved by observing their internal states via measurement and/or estimation.

Identification of NARX Models for Compensation Design

no code implementations • 19 Nov 2020 • Lucas A. Tavares, Petrus E. O. G. B. Abreu, Luis A. Aguirre

Finally, the experimental example is a pneumatic valve that presents a variety of nonlinearities, including hysteresis.

Beyond exploding and vanishing gradients: analysing RNN training using attractors and smoothness

1 code implementation • 20 Jun 2019 • Antônio H. Ribeiro, Koen Tiels, Luis A. Aguirre, Thomas B. Schön

The exploding and vanishing gradient problem has been the major conceptual principle behind most architecture and training improvements in recurrent neural networks (RNNs) during the last decade.
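
As a rough numerical illustration of the phenomenon itself (not of the paper's attractor- and smoothness-based analysis): in a linear recurrence the backpropagated gradient is multiplied by the recurrent weight matrix at every step, so its norm is governed by the spectral radius of that matrix. The toy script below uses assumed spectral radii of 0.9 and 1.1 to show the gradient norm collapsing or blowing up with sequence length.

```python
import numpy as np

# Toy sketch: in a linear RNN h_t = W h_{t-1}, the gradient at time T with
# respect to h_0 involves T factors of W^T, so its norm shrinks or grows
# geometrically with the spectral radius of W. The radii below are assumptions
# chosen only to show the two regimes.
rng = np.random.default_rng(0)
n, T = 20, 100

for rho in (0.9, 1.1):                            # assumed spectral radii
    W = rng.standard_normal((n, n))
    W *= rho / max(abs(np.linalg.eigvals(W)))     # rescale W to spectral radius rho
    g = rng.standard_normal(n)                    # gradient arriving at the last step
    for _ in range(T):
        g = W.T @ g                               # backpropagate one step through time
    print(f"rho = {rho}: gradient norm after {T} steps ≈ {np.linalg.norm(g):.2e}")
```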

On the smoothness of nonlinear system identification

1 code implementation • 2 May 2019 • Antônio H. Ribeiro, Koen Tiels, Jack Umenberger, Thomas B. Schön, Luis A. Aguirre

We shed new light on the \textit{smoothness} of optimization problems arising in prediction error parameter estimation of linear and nonlinear systems.
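
A hand-made sketch of the kind of cost surfaces involved (the first-order model and data below are assumptions, not the paper's examples): when the regressor uses measured past outputs, the prediction error cost is a smooth quadratic in the parameter, whereas the free-run simulation error cost is a high-degree polynomial in the same parameter and can be much less well-behaved.

```python
import numpy as np

# Assumed toy system (not from the paper): y_t = a0*y_{t-1} + u_t + noise.
a0, N = 0.9, 200
rng = np.random.default_rng(1)
u = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = a0 * y[t - 1] + u[t] + 0.05 * rng.standard_normal()

def one_step_cost(a):
    # One-step-ahead prediction error, built from *measured* past outputs:
    # quadratic (hence smooth) in a.
    e = y[1:] - (a * y[:-1] + u[1:])
    return np.mean(e ** 2)

def simulation_cost(a):
    # Free-run simulation error, built from the model's *own* past outputs:
    # a polynomial of degree ~N in a, where smoothness problems show up.
    ys = np.zeros(N)
    for t in range(1, N):
        ys[t] = a * ys[t - 1] + u[t]
    return np.mean((y - ys) ** 2)

grid = np.linspace(0.5, 1.05, 111)
V1 = [one_step_cost(a) for a in grid]
Vs = [simulation_cost(a) for a in grid]
print("one-step-ahead cost minimised at a ≈", grid[int(np.argmin(V1))])
print("simulation-error cost minimised at a ≈", grid[int(np.argmin(Vs))])
```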

Lasso Regularization Paths for NARMAX Models via Coordinate Descent

1 code implementation • 2 Oct 2017 • Antônio H. Ribeiro, Luis A. Aguirre

We propose a new algorithm for estimating NARMAX models with $L_1$ regularization for models represented as a linear combination of basis functions.
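
The general recipe can be sketched with made-up data and scikit-learn's coordinate-descent lasso_path rather than the authors' own implementation: build a regression matrix whose columns are candidate basis functions of lagged inputs and outputs, then trace each coefficient along a decreasing grid of $L_1$ penalties to see the order in which terms enter the model.

```python
import numpy as np
from sklearn.linear_model import lasso_path

# Assumed toy NARX-style system (not the paper's data).
rng = np.random.default_rng(0)
N = 300
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for t in range(2, N):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] * u[t - 1] + u[t - 1] + 0.01 * rng.standard_normal()

# Candidate regressors: lagged terms and a few products (the "basis functions").
cols = {
    "y[t-1]":        y[1:-1],
    "y[t-2]":        y[:-2],
    "u[t-1]":        u[1:-1],
    "u[t-2]":        u[:-2],
    "y[t-1]*u[t-1]": y[1:-1] * u[1:-1],
    "y[t-2]*u[t-1]": y[:-2] * u[1:-1],
    "y[t-1]**2":     y[1:-1] ** 2,
}
Phi = np.column_stack(list(cols.values()))
target = y[2:]

# lasso_path sweeps a decreasing grid of L1 penalties with coordinate descent
# and returns one coefficient vector per penalty value (the regularization path).
alphas, coefs, _ = lasso_path(Phi, target, n_alphas=50)
for name, path in zip(cols, coefs):
    first = np.flatnonzero(path)
    entry = f"alpha ≈ {alphas[first[0]]:.4f}" if first.size else "never selected"
    print(f"{name:>14s}: {entry}")
```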


"Parallel Training Considered Harmful?": Comparing series-parallel and parallel feedforward network training

1 code implementation • 21 Jun 2017 • Antônio H. Ribeiro, Luis A. Aguirre

Neural network models for dynamic systems can be trained either in parallel or in series-parallel configurations.
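
In NARX terms, the difference is whether the regressor uses measured past outputs (series-parallel, a one-step-ahead predictor trained like an ordinary feedforward network) or the model's own outputs fed back (parallel, a free-run simulator that is effectively recurrent). The sketch below, with an assumed one-hidden-layer PyTorch network and synthetic data, shows the two loss computations side by side.

```python
import torch
import torch.nn as nn

# Minimal sketch (assumed architecture, not the paper's): a feedforward net
# f(y[t-1], u[t-1]) used as a one-step model of a dynamic system.
net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))

def series_parallel_loss(y, u):
    # Series-parallel: the regressor uses the *measured* past output y[t-1],
    # so training reduces to ordinary one-step-ahead regression.
    x = torch.stack([y[:-1], u[:-1]], dim=1)
    y_pred = net(x).squeeze(1)
    return torch.mean((y[1:] - y_pred) ** 2)

def parallel_loss(y, u):
    # Parallel: the model's own previous output is fed back, so the network is
    # effectively recurrent and gradients flow through the whole simulation.
    y_hat = y[0]
    errs = []
    for t in range(1, len(y)):
        y_hat = net(torch.stack([y_hat, u[t - 1]]).unsqueeze(0)).squeeze()
        errs.append((y[t] - y_hat) ** 2)
    return torch.stack(errs).mean()

# Tiny synthetic dataset, just to make the example runnable.
torch.manual_seed(0)
u = torch.randn(100)
y = torch.zeros(100)
for t in range(1, 100):
    y[t] = 0.8 * y[t - 1] + 0.5 * torch.tanh(u[t - 1])

print("series-parallel loss:", series_parallel_loss(y, u).item())
print("parallel loss       :", parallel_loss(y, u).item())
```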
