Improved Initialization of State-Space Artificial Neural Networks

26 Mar 2021 · Maarten Schoukens

The identification of black-box nonlinear state-space models requires a flexible representation of the state and output equations. Artificial neural networks have proven to provide such a representation. However, as in many identification problems, a nonlinear optimization problem needs to be solved to obtain the model parameters (layer weights and biases). A well-chosen initialization of these model parameters can often prevent the nonlinear optimization algorithm from converging to a poorly performing local minimum of the considered cost function. This paper introduces an improved initialization approach for nonlinear state-space models represented as a recurrent artificial neural network and emphasizes the importance of including an explicit linear term in the model structure. Some of the neural network weights are initialized starting from a linear approximation of the nonlinear system, while others are initialized using random values or zeros. The effectiveness of the proposed initialization approach over previously proposed methods is illustrated on two benchmark examples.
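To make the initialization idea concrete, the following is a minimal sketch (not the authors' implementation) of a recurrent neural-network state-space model with an explicit linear term. It assumes the state and output equations take the form x[k+1] = A x[k] + B u[k] + f(x[k], u[k]) and y[k] = C x[k] + D u[k] + g(x[k], u[k]), that the matrices A, B, C, D of a previously identified linear approximation are available, and that the output layers of the nonlinear corrections f and g start at zero so the initial model reproduces the linear one. All names (StateSpaceNN, A_lin, hidden, etc.) are illustrative assumptions, not identifiers from the paper.

```python
# Minimal sketch of a state-space neural network initialized from a linear model.
# Assumption: the model contains an explicit linear part (A, B, C, D) plus small
# neural-network corrections f_net and g_net whose output layers are zeroed,
# so the initialized model behaves exactly like the linear approximation.
import torch
import torch.nn as nn


class StateSpaceNN(nn.Module):
    def __init__(self, A_lin, B_lin, C_lin, D_lin, hidden=32):
        super().__init__()
        n_x, n_u = B_lin.shape
        n_y = C_lin.shape[0]

        # Explicit linear term, initialized from the identified linear model.
        self.A = nn.Parameter(torch.as_tensor(A_lin, dtype=torch.float32))
        self.B = nn.Parameter(torch.as_tensor(B_lin, dtype=torch.float32))
        self.C = nn.Parameter(torch.as_tensor(C_lin, dtype=torch.float32))
        self.D = nn.Parameter(torch.as_tensor(D_lin, dtype=torch.float32))

        # Nonlinear corrections: hidden layers keep their default (random)
        # initialization, output layers are set to zero.
        self.f_net = nn.Sequential(nn.Linear(n_x + n_u, hidden), nn.Tanh(),
                                   nn.Linear(hidden, n_x))
        self.g_net = nn.Sequential(nn.Linear(n_x + n_u, hidden), nn.Tanh(),
                                   nn.Linear(hidden, n_y))
        for net in (self.f_net, self.g_net):
            nn.init.zeros_(net[-1].weight)
            nn.init.zeros_(net[-1].bias)

    def forward(self, u):
        # u: (batch, time, n_u). Simulate the recurrent model over the sequence.
        batch, T, _ = u.shape
        x = torch.zeros(batch, self.A.shape[0])
        ys = []
        for k in range(T):
            xu = torch.cat([x, u[:, k]], dim=-1)
            y = x @ self.C.T + u[:, k] @ self.D.T + self.g_net(xu)
            x = x @ self.A.T + u[:, k] @ self.B.T + self.f_net(xu)
            ys.append(y)
        return torch.stack(ys, dim=1)
```

With this kind of initialization, the first simulation of the network matches the linear approximation, and subsequent gradient-based optimization only has to learn the nonlinear deviation, which is the intuition behind starting some weights from the linear model and the remaining ones from random values or zeros.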
