Efficient detection of adversarial images

In this paper, detection of deception attacks on deep neural network (DNN)-based image classification in autonomous and cyber-physical systems is considered. Several studies have shown the vulnerability of DNNs to such malicious attacks, in which some or all pixel values of an image are modified by an external attacker so that the change is almost invisible to the human eye yet significant enough to cause a DNN-based classifier to misclassify the image. The paper first proposes a novel pre-processing technique that facilitates detection of such modified images regardless of the underlying DNN-based classifier and the attacker model. The proposed pre-processing algorithm combines principal component analysis (PCA)-based decomposition of the image with random perturbation-based detection to reduce computational complexity. Next, an adaptive version of this algorithm is proposed in which the number of perturbations is chosen adaptively by a doubly-threshold policy, and the threshold values are learned via stochastic approximation so as to minimize the expected number of perturbations subject to constraints on the false-alarm and missed-detection probabilities. Numerical experiments show that the proposed detection scheme outperforms a competing algorithm while achieving reasonably low computational complexity.
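
The abstract only outlines the approach, so the following is a minimal sketch of how a PCA-plus-random-perturbation detector with a doubly-threshold stopping rule might look. It assumes a generic `classify` callable standing in for the DNN under protection, scikit-learn's PCA, and hand-picked constants (`noise_scale`, `lower_threshold`, `upper_threshold`); all names and parameter choices are hypothetical and this is not the authors' implementation.

```python
# Hypothetical sketch of the detection idea, assuming a generic classifier
# callable and scikit-learn's PCA; not the authors' reference implementation.
import numpy as np
from sklearn.decomposition import PCA


def detect_adversarial(image, classify, pca, rng,
                       max_perturbations=20, min_perturbations=5,
                       lower_threshold=0.1, upper_threshold=0.5,
                       noise_scale=0.05):
    """Flag `image` as adversarial if its label is unstable under random
    perturbations applied in the PCA coefficient domain.

    Returns (is_adversarial, number_of_perturbations_used)."""
    base_label = classify(image)
    coeffs = pca.transform(image.reshape(1, -1))  # PCA coefficients of the image
    flips = 0
    for k in range(1, max_perturbations + 1):
        # Randomly perturb the coefficients and reconstruct the image.
        noisy_coeffs = coeffs + noise_scale * rng.standard_normal(coeffs.shape)
        perturbed = pca.inverse_transform(noisy_coeffs).reshape(image.shape)
        if classify(perturbed) != base_label:
            flips += 1
        flip_rate = flips / k
        # Doubly-threshold stopping rule: declare "adversarial" once the running
        # label-flip rate exceeds the upper threshold, declare "clean" once it
        # falls below the lower threshold (after a few samples), otherwise keep
        # drawing perturbations.
        if flip_rate >= upper_threshold:
            return True, k
        if k >= min_perturbations and flip_rate <= lower_threshold:
            return False, k
    # Inconclusive after exhausting the perturbation budget; default to "clean".
    return False, max_perturbations


# Toy usage with a placeholder classifier; in practice `classify` would wrap
# the DNN under protection and `pca` would be fitted on clean training images.
rng = np.random.default_rng(0)
clean_images = rng.random((200, 32 * 32))
pca = PCA(n_components=40).fit(clean_images)
classify = lambda img: int(img.mean() > 0.5)
flag, n_used = detect_adversarial(clean_images[0], classify, pca, rng)
print(flag, n_used)
```

In the adaptive scheme described in the abstract, the two thresholds would be learned via stochastic approximation to minimize the expected number of perturbations subject to false-alarm and missed-detection constraints, rather than being fixed constants as in this sketch.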
