Verification of Neural Network Control Policy Under Persistent Adversarial Perturbation

18 Aug 2019  ·  Yuh-Shyang Wang, Tsui-Wei Weng, Luca Daniel

Deep neural networks are known to be fragile to small adversarial perturbations. This issue becomes more critical when a neural network is interconnected with a physical system in a closed loop. In this paper, we show how to combine recent neural network certification tools (used mainly in static settings such as image classification) with robust control theory to certify a neural network policy in a control loop. Specifically, we give a sufficient condition and an algorithm to ensure that the closed-loop state and control constraints are satisfied when the persistent adversarial perturbation is l-infinity norm bounded. Our method is based on finding a positively invariant set of the closed-loop dynamical system, and thus it does not require differentiability or continuity of the neural network policy. Along with the verification result, we also develop an effective attack strategy for neural network control systems that significantly outperforms exhaustive Monte Carlo search. We show that our certification algorithm works well on learned models and achieves a result 5 times better than the traditional Lipschitz-based method in certifying the robustness of a neural network policy on a cart-pole control problem.
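To make the invariance-based certification idea concrete, below is a minimal sketch, not the paper's implementation. It assumes a toy linear plant, a randomly initialized ReLU policy, a fixed candidate box as the invariant-set candidate, and plain interval bound propagation as a crude stand-in for the tighter neural network certification bounds the paper builds on; all names and parameters here are hypothetical.

```python
import numpy as np

# --- Hypothetical setup (illustration only; not the paper's model) ---
# Plant: x_{t+1} = A x_t + B u_t + w_t, with persistent disturbance
# ||w_t||_inf <= eps at every time step.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
eps = 0.01          # l-infinity bound on the persistent perturbation
u_max = 1.0         # control constraint |u| <= u_max

# Toy one-hidden-layer ReLU policy: u = W2 @ relu(W1 @ x + b1) + b2.
rng = np.random.default_rng(0)
W1, b1 = 0.3 * rng.standard_normal((8, 2)), np.zeros(8)
W2, b2 = 0.3 * rng.standard_normal((1, 8)), np.zeros(1)

def ibp_policy_bounds(x_lo, x_hi):
    """Interval bound propagation through the ReLU policy.

    A simple stand-in for the tighter neural-network certification
    tools referenced in the abstract.
    """
    c, r = (x_lo + x_hi) / 2, (x_hi - x_lo) / 2
    c1, r1 = W1 @ c + b1, np.abs(W1) @ r
    h_lo, h_hi = np.maximum(c1 - r1, 0), np.maximum(c1 + r1, 0)  # ReLU
    c2, r2 = (h_lo + h_hi) / 2, (h_hi - h_lo) / 2
    u_lo = W2 @ c2 - np.abs(W2) @ r2 + b2
    u_hi = W2 @ c2 + np.abs(W2) @ r2 + b2
    return u_lo, u_hi

def is_positively_invariant(x_lo, x_hi):
    """Check that the box [x_lo, x_hi] maps into itself under all
    admissible perturbations and that the control constraint holds."""
    u_lo, u_hi = ibp_policy_bounds(x_lo, x_hi)
    if np.any(u_lo < -u_max) or np.any(u_hi > u_max):
        return False  # control constraint may be violated
    # Interval over-approximation of A x + B u + w on the box.
    cx, rx = (x_lo + x_hi) / 2, (x_hi - x_lo) / 2
    cu, ru = (u_lo + u_hi) / 2, (u_hi - u_lo) / 2
    nxt_c = A @ cx + B @ cu
    nxt_r = np.abs(A) @ rx + np.abs(B) @ ru + eps
    return bool(np.all(nxt_c - nxt_r >= x_lo) and np.all(nxt_c + nxt_r <= x_hi))

# Candidate invariant box around the origin; the paper's algorithm would
# search for or refine such a set rather than fixing it by hand.
box_lo, box_hi = np.array([-0.5, -0.5]), np.array([0.5, 0.5])
print("certified positively invariant:", is_positively_invariant(box_lo, box_hi))
```

If the check returns true, every trajectory starting in the box stays in it for all l-infinity bounded perturbations, which is the sense in which the state and control constraints are certified; because only set membership is used, no differentiability or continuity of the policy is required.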
