Sensitive Samples Revisited: Detecting Neural Network Attacks Using Constraint Solvers

7 Sep 2021 · Amel Nestor Docena, Thomas Wahl, Trevor Pearce, Yunsi Fei

Neural networks are used today in numerous security- and safety-relevant domains and are, as such, a popular target of attacks that subvert their classification capabilities by manipulating the network parameters. Prior work has introduced sensitive samples -- inputs highly sensitive to parameter changes -- to detect such manipulations, and proposed a gradient-ascent-based approach to compute them. In this paper we offer an alternative using symbolic constraint solvers: we model the network and a formal specification of a sensitive sample in the language of the solver and ask for a solution. This approach supports a rich class of queries, corresponding, for instance, to the presence of certain types of attacks. Unlike earlier techniques, our approach depends neither on convex search domains nor on the suitability of a starting point for the search. We address the performance limitations of constraint solvers by partitioning the search space and exploring the partitions according to a balanced schedule that still retains completeness of the search. We demonstrate the impact of using solvers, in terms of functionality and search efficiency, with a case study on detecting Trojan attacks on neural networks.
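
To make the solver-based formulation concrete, here is a minimal sketch (not the paper's implementation) of a sensitive-sample query for a toy one-layer ReLU network, written against the Z3 SMT solver's Python API. All network sizes, weight values, and the bounds `delta` and `gap` below are hypothetical placeholders.

```python
# A minimal sketch (not the paper's implementation) of a sensitive-sample
# query in Z3. All sizes, weights, and bounds below are hypothetical.
from z3 import And, If, Real, RealVal, Solver, sat

W = [[0.8, -0.3], [0.2, 0.5]]   # nominal weights of a toy 2x2 layer
b = [0.1, -0.2]                 # nominal biases
delta = 0.05                    # assumed bound on an attacker's weight edits
gap = 0.08                      # output deviation required to call x "sensitive"

def relu(t):
    return If(t > 0, t, RealVal(0))

x = [Real(f"x{i}") for i in range(2)]                          # the sample
d = [[Real(f"d{i}{j}") for j in range(2)] for i in range(2)]   # weight edits

def score(weights, k):
    # Output score k of a one-layer ReLU network on the symbolic input x.
    return relu(sum(weights[k][j] * x[j] for j in range(2)) + b[k])

s = Solver()
s.add(And(*[And(xi >= 0, xi <= 1) for xi in x]))               # input domain
s.add(And(*[And(dij >= -delta, dij <= delta) for row in d for dij in row]))
perturbed = [[W[i][j] + d[i][j] for j in range(2)] for i in range(2)]
# Spec: some bounded parameter change shifts output score 0 by at least `gap`.
s.add(score(perturbed, 0) - score(W, 0) >= gap)

if s.check() == sat:
    m = s.model()
    print("sensitive sample:", [m.eval(xi) for xi in x])
```

The "rich class of queries" the abstract mentions would then amount to additional constraints on the same encoding: for example, restricting `d` to the footprint of a suspected Trojan modification, or requiring that the classification flip rather than that a score shift.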
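
The partitioned search can be sketched similarly, again under assumed details rather than the paper's exact algorithm: the input domain is split into boxes, each box becomes its own solver query, and the queries are revisited round-robin with a growing per-call time budget, so no partition is starved and unsatisfiable partitions are eventually discharged, preserving completeness.

```python
# A sketch (assumed scheme, not necessarily the paper's exact schedule) of
# partitioned search: each partition is a separate solver query, revisited
# round-robin with a growing time budget.
from z3 import And, Real, Solver, sat, unsat

x = Real("x")

def make_query(lo, hi):
    # One partition of a 1-D input domain; the quadratic constraint is a
    # stand-in for a real sensitive-sample specification.
    s = Solver()
    s.add(And(x >= lo, x <= hi))
    s.add(x * (1 - x) >= 0.2)
    return s

partitions = [make_query(i / 4, (i + 1) / 4) for i in range(4)]
budget_ms, found = 100, None
while partitions and found is None:
    still_open = []
    for s in partitions:
        s.set(timeout=budget_ms)       # bounded effort on this partition
        result = s.check()
        if result == sat:
            found = s.model()[x]       # a sensitive sample in this partition
            break
        if result != unsat:            # "unknown" = timed out; retry later
            still_open.append(s)
    partitions, budget_ms = still_open, budget_ms * 2
print("sensitive sample:", found if found is not None else "none in domain")
```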
