Learning Performance Bounds for Safety-Critical Systems

9 Sep 2021  ·  Prithvi Akella, Ugo Rosolia, Aaron D. Ames

As the complexity of control systems increases, so does the need for systematic methods to guarantee their efficacy. However, directly testing these systems is oftentimes costly, difficult, or impractical. As a result, the test-and-evaluation ideal would be to verify the efficacy of a system simulator and use this verification result to make statements about true system performance. This paper formalizes this performance translation for a specific class of desired system behaviors. In that vein, our contribution is twofold. First, we detail a variant of existing Bayesian optimization algorithms that identifies minimal upper bounds on maximization problems, with some minimum probability. Second, we use this algorithm to (i) lower bound the minimum simulator robustness and (ii) upper bound the expected deviation between the true and simulated systems. Then, for the specific class of desired behaviors studied, we leverage these bounds to lower bound the minimum true system robustness without directly testing the true system. Finally, we compare a high-fidelity ROS simulator of a Segway against a significantly noisier version of itself and show that our probabilistic verification bounds are indeed satisfied.
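
To make the flavor of such a probabilistic upper bound concrete, the following is a minimal sketch of a plain uniform-sampling estimator, not the paper's Bayesian optimization variant but the standard sample-based guarantee such methods build on: the maximum of N i.i.d. evaluations exceeds the (1 - epsilon)-quantile of the objective with confidence at least 1 - (1 - epsilon)^N. The function name probabilistic_upper_bound and the example objective are illustrative assumptions, not from the paper.

    import math
    import numpy as np

    def probabilistic_upper_bound(f, sampler, epsilon=0.05, delta=1e-3):
        """Sample-based upper bound M for a maximization problem, such that
        P[f(x) <= M] >= 1 - epsilon holds with confidence >= 1 - delta.

        f       : objective whose maximum we want to bound from above
        sampler : draws one i.i.d. point from the decision space
        """
        # Confidence 1 - (1 - epsilon)^N >= 1 - delta requires
        # N >= ln(delta) / ln(1 - epsilon) samples (both logs are negative).
        N = math.ceil(math.log(delta) / math.log(1.0 - epsilon))
        samples = [f(sampler()) for _ in range(N)]
        return max(samples), N

    # Hypothetical usage: bound a noisy 1-D objective over [0, 1].
    rng = np.random.default_rng(0)
    f = lambda x: -(x - 0.7) ** 2 + 0.1 * rng.standard_normal()
    bound, n_samples = probabilistic_upper_bound(f, lambda: rng.uniform(0.0, 1.0))
    print(f"upper bound {bound:.3f} from {n_samples} samples")

With epsilon = 0.05 and delta = 1e-3, this requires N = 135 samples; the paper's Bayesian-optimization-based variant presumably aims to reach such bounds more sample-efficiently than uniform sampling.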
