Correctness Verification of Neural Networks

3 Jun 2019 · Yichen Yang, Martin Rinard

We present a novel framework for specifying and verifying the correctness of neural networks on perception tasks globally. Most previous work on neural network verification for perception tasks focuses on robustness verification. Unlike robustness verification, which aims to verify that a network's prediction is stable in local regions around labelled points, our framework provides a way to specify correctness over the whole target input space and to verify that the network is correct for all target inputs (or to find the regions where it is not). We provide a specification through (1) a state space consisting of all relevant states of the world and (2) an observation process that produces neural network inputs from those states. Tiling the state and input spaces with a finite number of tiles, obtaining ground truth bounds from the state tiles and network output bounds from the input tiles, and then comparing the two delivers an upper bound on the network output error for any input of interest. The framework also enables detecting illegal inputs -- inputs that are not contained in (or close to) the target input space defined by the state space and observation process, and on which the network is therefore not designed to work -- so that we can flag inputs for which no guarantee holds. Results from two case studies highlight the ability of our technique to verify error bounds over the whole target input space and show how the error bounds vary over the state and input spaces.
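To make the tiling-and-comparison step concrete, here is a minimal sketch, not the authors' implementation or either case study: it assumes a hypothetical one-dimensional state space, a monotone ground-truth function and observation process, and a small randomly initialised network whose output bounds come from naive interval bound propagation. All names, the network, and the numbers below are illustrative assumptions.

```python
# Sketch of the tiling idea from the abstract: tile the state space, bound the
# ground truth on each state tile and the network output on the matching input
# tile, and compare the bounds to get a per-tile (and global) error bound.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D state space: the "state of the world" is a scalar s in [0, 1].
STATE_LO, STATE_HI = 0.0, 1.0

def ground_truth(s):
    """Quantity the network should predict for state s (hypothetical, monotone)."""
    return 2.0 * s

def observe_bounds(s_lo, s_hi):
    """Observation process mapped over a state tile: returns an input interval
    [x_lo, x_hi] covering every input the tile can produce (monotone here)."""
    return np.array([s_lo]), np.array([s_hi])

# A tiny fixed network: two affine layers with a ReLU in between.
W1, b1 = rng.normal(size=(4, 1)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

def network_output_bounds(x_lo, x_hi):
    """Interval bound propagation through the toy network: sound lower/upper
    bounds on the network output over the input interval."""
    def affine(lo, hi, W, b):
        center, radius = (lo + hi) / 2, (hi - lo) / 2
        c = W @ center + b
        r = np.abs(W) @ radius
        return c - r, c + r
    lo, hi = affine(x_lo, x_hi, W1, b1)
    lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)   # ReLU
    return affine(lo, hi, W2, b2)

# Tile the state space, bound ground truth and network output per tile,
# and take the worst-case gap as the error bound on that tile.
n_tiles = 100
edges = np.linspace(STATE_LO, STATE_HI, n_tiles + 1)
error_bounds = []
for s_lo, s_hi in zip(edges[:-1], edges[1:]):
    gt_lo, gt_hi = ground_truth(s_lo), ground_truth(s_hi)  # monotone ground truth
    x_lo, x_hi = observe_bounds(s_lo, s_hi)
    out_lo, out_hi = network_output_bounds(x_lo, x_hi)
    # Largest possible |prediction - ground truth| anywhere in this tile.
    error_bounds.append(max(out_hi[0] - gt_lo, gt_hi - out_lo[0]))

print(f"global error bound over the target input space: {max(error_bounds):.3f}")
```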

