Exploiting the Sensitivity of $L_2$ Adversarial Examples to Erase-and-Restore

1 Jan 2020 · Fei Zuo, Qiang Zeng

By adding carefully crafted perturbations to input images, adversarial examples (AEs) can be generated to mislead neural-network-based image classifiers. $L_2$ adversarial perturbations by Carlini and Wagner (CW) are among the most effective but difficult-to-detect attacks. While many countermeasures against AEs have been proposed, detection of adaptive CW-$L_2$ AEs is still an open question. We find that, by randomly erasing some pixels in an $L_2$ AE and then restoring it with an inpainting technique, the AE tends to be classified differently before and after these steps, whereas a benign sample does not show this symptom. We thus propose a novel AE detection technique, Erase-and-Restore (E&R), that exploits this intriguing sensitivity of $L_2$ attacks. Experiments conducted on two popular image datasets, CIFAR-10 and ImageNet, show that the proposed technique is able to detect over 98% of $L_2$ AEs and has a very low false positive rate on benign images. The detection technique exhibits high transferability: a detection system trained using CW-$L_2$ AEs can accurately detect AEs generated using another $L_2$ attack method. More importantly, our approach demonstrates strong resilience to adaptive $L_2$ attacks, filling a critical gap in AE detection. Finally, we interpret the detection technique through both visualization and quantification.
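The abstract describes the core mechanism but no reference implementation is linked. Below is a minimal sketch of the erase-and-restore idea, not the authors' code: it assumes a user-supplied `classify(image) -> label` function, uses OpenCV's Telea inpainting as a stand-in for the paper's inpainting step, and the erase fraction, trial count, and disagreement threshold are illustrative placeholders rather than the paper's settings.

```python
import numpy as np
import cv2  # OpenCV supplies classical inpainting (Telea / Navier-Stokes)


def erase_and_restore(image, erase_frac=0.05, rng=None):
    """Randomly erase a fraction of pixels, then restore them via inpainting.

    image: HxWx3 uint8 array. Returns the restored image.
    erase_frac is a hypothetical default, not a value from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    # 8-bit single-channel mask: 1 marks a pixel to erase and inpaint.
    mask = (rng.random((h, w)) < erase_frac).astype(np.uint8)
    erased = image.copy()
    erased[mask.astype(bool)] = 0
    # Fill the erased pixels from their local neighborhood.
    return cv2.inpaint(erased, mask, 3, cv2.INPAINT_TELEA)


def looks_adversarial(image, classify, n_trials=10, disagree_thresh=0.3):
    """Flag an input whose label frequently changes after erase-and-restore.

    classify: hypothetical callable mapping an image to a class label.
    An L2 AE tends to flip labels across trials; a benign image rarely does.
    """
    original_label = classify(image)
    disagreements = sum(
        classify(erase_and_restore(image)) != original_label
        for _ in range(n_trials)
    )
    return disagreements / n_trials >= disagree_thresh
```

In this sketch the before/after label disagreement rate over several randomized trials serves as the detection signal; the paper's actual detector may aggregate this sensitivity differently (e.g., with a trained classifier over the outputs), so treat the thresholding here as one plausible reading of the described approach.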
