Scratch that! An Evolution-based Adversarial Attack against Neural Networks

5 Dec 2019 · Malhar Jere, Loris Rossi, Briland Hitaj, Gabriela Ciocarlie, Giacomo Boracchi, Farinaz Koushanfar

We study black-box adversarial attacks on image classifiers in a constrained threat model, where adversaries can only modify a small fraction of pixels in the form of scratches on an image. We show that adversaries can generate localized adversarial scratches that cover less than 5% of the pixels in an image and achieve targeted success rates of 98.77% and 97.20% on ResNet-50 models trained on ImageNet and CIFAR-10, respectively.
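At its core, the attack is an evolutionary search over a handful of scratch parameters, scored only by the model's output probabilities. The Python sketch below is a minimal illustration of that idea, not the paper's implementation: it assumes a black-box `predict(image)` function returning class probabilities, parameterizes a scratch as a straight one-pixel-wide colored line (the paper's scratches may use a richer parameterization, e.g. curves), and uses a simple truncation-selection evolutionary loop. The helpers `draw_scratch` and `evolve_scratch` are hypothetical names introduced here.

```python
# Illustrative sketch of an evolution-based scratch attack (assumptions:
# float images in [0, 1], straight-line scratches, truncation selection).
import numpy as np

def draw_scratch(image, params):
    """Return a copy of `image` with a 1-pixel-wide line drawn on it.

    params = (x0, y0, x1, y1, r, g, b), all entries in [0, 1].
    """
    params = np.asarray(params)
    h, w, _ = image.shape
    x0, y0, x1, y1 = (params[:4] * [w - 1, h - 1, w - 1, h - 1]).astype(int)
    color = params[4:7]
    out = image.copy()
    n = max(abs(x1 - x0), abs(y1 - y0)) + 1  # enough samples for a solid line
    xs = np.linspace(x0, x1, n).round().astype(int)
    ys = np.linspace(y0, y1, n).round().astype(int)
    out[ys, xs] = color
    return out

def evolve_scratch(image, predict, target, pop_size=40, iters=100,
                   sigma=0.1, rng=None):
    """Evolve scratch parameters that maximize the target-class probability.

    `predict(image)` is the black-box model; only its outputs are used.
    """
    rng = np.random.default_rng(rng)
    pop = rng.random((pop_size, 7))  # each row: one candidate scratch
    for _ in range(iters):
        fitness = np.array([predict(draw_scratch(image, p))[target]
                            for p in pop])
        elite = pop[np.argsort(fitness)[-pop_size // 4:]]  # keep best quarter
        # Refill the population with mutated copies of the elite.
        children = elite[rng.integers(len(elite), size=pop_size - len(elite))]
        children = np.clip(children + sigma * rng.standard_normal(children.shape),
                           0, 1)
        pop = np.vstack([elite, children])
    best = pop[np.argmax([predict(draw_scratch(image, p))[target] for p in pop])]
    return draw_scratch(image, best), best
```

Because fitness depends only on the model's output probabilities, the search stays strictly black-box; the population size, mutation scale `sigma`, and iteration budget above are arbitrary placeholders, not values from the paper.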
