Towards Scaling Robustness Verification of Semantic Features via Proof Velocity

29 Sep 2021 · Anan Kabaha, Dana Drachsler Cohen

Robustness analysis is important for understanding the reliability of neural networks. Despite significant progress in verification techniques for both $L_p$ and semantic-feature neighborhoods, existing approaches struggle to scale to deep networks and large datasets. For example, we are unaware of any analyzer that scales to AlexNet trained on ImageNet (consisting of 224x224x3 images). In this work, we take a step towards scaling robustness analysis. We focus on robustness to perturbations of semantic features and introduce the concept of proof velocity to scale the analysis. The key idea is to phrase the verification task as a dynamic system and adaptively identify how to split it into subproblems, each with maximal proof velocity. We propose a policy that determines the next subproblem based on past observations, leveraging input splitting, input refinement, and bound tightening. We evaluate our approach on CIFAR-10 and ImageNet and show that it can analyze neighborhoods of various features: hue, saturation, lightness, brightness, and PCA.
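The subproblem-selection idea can be illustrated with a toy sketch (my own illustration, not the paper's algorithm; `upper_bound` and `verify` are hypothetical names). A crude interval bound on a scalar function stands in for a network abstraction, splitting an interval stands in for splitting a feature neighborhood, and "proof velocity" is measured as the bound improvement gained per split. Only the input-splitting action is shown; the paper's policy also chooses between input refinement and bound tightening.

```python
# Toy sketch of velocity-guided proof search: certify that
# f(x) = x*(1-x) stays below a threshold over an input interval.

def upper_bound(lo: float, hi: float) -> float:
    """Loose upper bound on x*(1-x) over [lo, hi]: sample three points
    and add slack proportional to the width (mimicking abstraction error)."""
    mid = (lo + hi) / 2
    samples = (lo * (1 - lo), hi * (1 - hi), mid * (1 - mid))
    return max(samples) + 0.5 * (hi - lo)

def verify(lo: float, hi: float, threshold: float = 0.3, budget: int = 64) -> bool:
    """Discharge subproblems in order of their recent proof velocity."""
    # Worklist entries: (-velocity, lo, hi, current upper bound).
    work = [(0.0, lo, hi, upper_bound(lo, hi))]
    steps = 0
    while work and steps < budget:
        work.sort()                     # most negative key = fastest-improving
        _, a, b, ub = work.pop(0)
        if ub <= threshold:
            continue                    # subproblem already proved
        m = (a + b) / 2                 # split the chosen subproblem
        for l, r in ((a, m), (m, b)):
            new_ub = upper_bound(l, r)
            if new_ub > threshold:      # keep only undischarged halves
                velocity = ub - new_ub  # bound improvement from this split
                work.append((-velocity, l, r, new_ub))
        steps += 1
    return not work  # verified iff every subproblem was discharged
```

With threshold 0.3 the property holds (the true maximum of x*(1-x) is 0.25) and the search discharges all subproblems within the budget; with threshold 0.2 the property is false near x = 0.5, so the worklist never empties and verification fails.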
