Youla-REN: Learning Nonlinear Feedback Policies with Robust Stability Guarantees

2 Dec 2021 · Ruigang Wang, Ian R. Manchester

This paper presents a parameterization of nonlinear controllers for uncertain systems that builds on a recently developed neural network architecture, the recurrent equilibrium network (REN), and a nonlinear version of the Youla parameterization. The proposed framework has "built-in" stability guarantees: every policy in the search space results in a contracting (globally exponentially stable) closed-loop system. It therefore requires only very mild assumptions on the choice of cost function, and the stability property generalizes to unseen data. Another useful feature of this approach is that policies are parameterized directly, without any constraints, which simplifies learning with a broad range of policy-learning methods based on unconstrained optimization (e.g., stochastic gradient descent). We illustrate the proposed approach with a variety of simulation examples.
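As a concrete, hypothetical illustration of the Youla idea described in the abstract, the sketch below wraps a learnable operator Q around an internal model of an open-loop stable linear plant and trains it with plain gradient descent. The plant matrices A, B, C, the class name YoulaPolicy, and the use of a static tanh network in place of a REN are illustrative assumptions, not the authors' implementation; in the paper Q is a REN, whose direct parameterization remains contracting for every value of its unconstrained weights, and robustness to plant uncertainty (the paper's focus) would additionally require bounding the gain of Q, which is not shown here.

```python
# Hypothetical sketch (not the authors' code): a Youla-style policy for an
# open-loop stable LTI plant x+ = A x + B u, y = C x.  The policy runs an
# internal copy of the plant model and feeds the output prediction error
# (innovation) through a learnable operator Q.  Because the plant is stable
# and Q here is a static (memoryless, hence stable) map, the closed loop
# stays stable for any value of the unconstrained weights, so ordinary
# gradient descent can be used without projections or constraints.
import torch
import torch.nn as nn

A = torch.tensor([[0.9, 0.1], [0.0, 0.8]])   # stable plant dynamics (assumed)
B = torch.tensor([[0.0], [1.0]])
C = torch.tensor([[1.0, 0.0]])

class YoulaPolicy(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        # Static tanh network standing in for the Youla parameter Q
        # (the paper uses a REN, a dynamic contracting model, instead).
        self.q = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                               nn.Linear(hidden, 1))

    def forward(self, y, x_model):
        y_hat = x_model @ C.T                 # internal-model output prediction
        u = self.q(y - y_hat)                 # u = Q(innovation)
        x_model = x_model @ A.T + u @ B.T     # propagate the internal model
        return u, x_model

policy = YoulaPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

for step in range(200):                       # unconstrained policy learning
    x = torch.randn(32, 2)                    # batch of random initial states
    x_model = torch.zeros(32, 2)
    cost = 0.0
    for t in range(30):
        y = x @ C.T
        u, x_model = policy(y, x_model)
        x = x @ A.T + u @ B.T                 # true plant (model-matched here)
        cost = cost + (x.pow(2).sum(1) + 0.1 * u.pow(2).sum(1)).mean()
    opt.zero_grad()
    cost.backward()
    opt.step()
```

The property this toy example is meant to illustrate is the one emphasized in the abstract: no constraint or projection is applied during training, yet every gradient step yields a stabilizing policy, because stability is built into the parameterization rather than enforced on the cost.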
