Safe and Near-Optimal Policy Learning for Model Predictive Control using Primal-Dual Neural Networks

19 Jun 2019 · Xiaojing Zhang, Monimoy Bujarbaruah, Francesco Borrelli

In this paper, we propose a novel framework for approximating the explicit MPC law for linear parameter-varying systems using supervised learning. In contrast to most existing approaches, we learn not only the control policy, but also a "certificate policy" that allows us to estimate the sub-optimality of the learned control policy online, during execution. We learn both policies from data using supervised learning techniques, and we also provide a randomized method for guaranteeing the quality of each learned policy, measured in terms of feasibility and optimality. This in turn allows us to bound the probability of the learned control policy being infeasible or suboptimal, where the check is performed by the certificate policy. Since our algorithm does not require solving an optimization problem at run time, it can be deployed even on resource-constrained systems. We illustrate the efficacy of the proposed framework on a vehicle dynamics control problem, demonstrating a speedup of up to two orders of magnitude compared to online optimization with minimal performance degradation.
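
The abstract does not spell out the certification mechanism, so the following is only a minimal sketch of the general idea, assuming the MPC problem reduces to a parametric quadratic program and that the certificate comes from weak duality. All names (primal_policy, dual_policy, certify) and the QP data below are illustrative placeholders, not the authors' implementation.

```python
# Sketch (not the authors' code): run-time certification of a learned MPC policy
# via weak duality on an assumed parametric QP
#
#     minimize_u  0.5 u'Hu + q(x)'u   subject to  G u <= w(x)
#
# A "primal" network predicts the control sequence u(x); a "dual" (certificate)
# network predicts multipliers lambda(x) >= 0. For any lambda >= 0 the dual
# function lower-bounds the optimal cost, so for a feasible u the gap
# cost(u) - g(lambda) bounds its suboptimality without solving the QP online.

import numpy as np

rng = np.random.default_rng(0)

# --- placeholder parametric QP data (H fixed; q and w depend on the state x) ---
n_u, n_c, n_x = 4, 6, 3
H = np.eye(n_u)                               # positive-definite cost Hessian
G = rng.standard_normal((n_c, n_u))           # constraint matrix
F_q = rng.standard_normal((n_u, n_x))         # q(x) = F_q @ x
F_w = rng.standard_normal((n_c, n_x))         # w(x) = F_w @ x + w0
w0 = np.ones(n_c)

# --- stand-ins for the two learned policies (here: untrained linear maps) ---
W_p = 0.1 * rng.standard_normal((n_u, n_x))
W_d = 0.1 * rng.standard_normal((n_c, n_x))

def primal_policy(x):
    """Stand-in for the learned control (primal) policy: x -> u."""
    return W_p @ x

def dual_policy(x):
    """Stand-in for the learned certificate (dual) policy: x -> lambda >= 0."""
    return np.maximum(W_d @ x, 0.0)           # clip to keep multipliers dual-feasible

def certify(x, feas_tol=1e-6):
    """Check feasibility of u(x) and bound its suboptimality via the dual."""
    q, w = F_q @ x, F_w @ x + w0
    u, lam = primal_policy(x), dual_policy(x)

    violation = np.max(G @ u - w)             # <= 0 means the predicted u is feasible
    primal_cost = 0.5 * u @ H @ u + q @ u

    # QP dual function: g(lam) = -0.5 (q + G'lam)' H^{-1} (q + G'lam) - lam'w
    z = q + G.T @ lam
    dual_bound = -0.5 * z @ np.linalg.solve(H, z) - lam @ w

    gap = primal_cost - dual_bound            # upper bound on suboptimality if u is feasible
    return violation <= feas_tol, gap

feasible, gap = certify(rng.standard_normal(n_x))
print(f"feasible: {feasible}, certified suboptimality bound: {gap:.4f}")
```

With trained networks in place of the linear stand-ins, the same check runs at execution time using only matrix-vector products, which is what makes deployment on resource-constrained hardware plausible.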

