Robust Policy Optimization in Continuous-time Mixed $\mathcal{H}_2/\mathcal{H}_\infty$ Stochastic Control

9 Sep 2022 · Leilei Cui, Lekan Molu

Following the recent resurgence in establishing linear control-theoretic benchmarks for reinforcement learning (RL)-based policy optimization (PO) in complex dynamical systems with continuous state and action spaces, we consider an optimal control problem for a continuous-time infinite-dimensional linear stochastic system driven by additive Brownian motion, with a cost that is an exponent of a quadratic form in the state, input, and disturbance terms. We lay out a model-based and a model-free algorithm for RL-based stochastic PO. For the model-based algorithm, we establish rigorous convergence guarantees. For the sampling-based algorithm, over trajectory arcs that emanate from the phase space, we find that the Hamilton-Jacobi-Bellman equation parameterizes trajectory costs, yielding a discrete-time (input- and state-based) sampling scheme, with continuous-time policy iterates, that accommodates unknown nonlinear dynamics. The need for known dynamics operators is thereby circumvented, and we arrive at a reinforced PO algorithm (via policy iteration) in which an upper bound on the $\mathcal{H}_2$ norm is minimized (to guarantee stability) and robustness is enforced by maximizing the cost with respect to a controller that incorporates the level of noise attenuation specified by the system's $\mathcal{H}_\infty$ norm. Rigorous robustness analysis is prescribed in an input-to-state stability formalism. Our analyses and contributions apply to many natural systems characterized by additive Wiener processes and amenable to Itô's stochastic differential calculus in dynamic game settings.
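To make the minimize-over-control / maximize-over-disturbance structure of such a policy-iteration scheme concrete, below is a minimal model-based sketch for the zero-sum linear-quadratic game commonly used in mixed $\mathcal{H}_2/\mathcal{H}_\infty$ design. All symbols (`A`, `B`, `D`, `Q`, `R`, the attenuation level `gamma`) and the simultaneous gain updates are illustrative assumptions, not the paper's exact operators, algorithm, or convergence conditions.

```python
# Illustrative model-based policy iteration for a zero-sum LQ game
# (a sketch under assumed matrices and update rules; not the paper's algorithm).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def mixed_h2_hinf_policy_iteration(A, B, D, Q, R, gamma, iters=50, tol=1e-9):
    n, m = B.shape
    d = D.shape[1]
    K = np.zeros((m, n))      # control gain:      u = -K x  (minimizing player)
    L = np.zeros((d, n))      # disturbance gain:  w =  L x  (maximizing player)
    P_prev = np.zeros((n, n))
    for _ in range(iters):
        # Policy evaluation: solve the closed-loop Lyapunov equation
        #   Acl' P + P Acl + Q + K' R K - gamma^2 L' L = 0
        Acl = A - B @ K + D @ L
        M = Q + K.T @ R @ K - gamma**2 * L.T @ L
        P = solve_continuous_lyapunov(Acl.T, -M)
        # Policy improvement: minimize over the control, maximize over the disturbance
        K = np.linalg.solve(R, B.T @ P)
        L = (1.0 / gamma**2) * D.T @ P
        if np.linalg.norm(P - P_prev) < tol:
            break
        P_prev = P
    return P, K, L

# Toy usage with purely illustrative numbers
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([[0.0], [1.0]])
D = np.array([[0.0], [0.5]])
Q = np.eye(2)
R = np.eye(1)
P, K, L = mixed_h2_hinf_policy_iteration(A, B, D, Q, R, gamma=5.0)
```

A sampling-based variant would replace the Lyapunov-equation evaluation step with a least-squares fit of the value matrix from measured state and input trajectory data, in the spirit of the abstract's discrete-time sampling scheme; that data-driven step, and the convergence guarantees the paper establishes, are not reproduced in this sketch.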
