Geometry-Inspired Top-k Adversarial Perturbations

The brittleness of deep image classifiers to small adversarial input perturbations has been extensively studied in recent years. However, existing perturbations are primarily designed to change the correctly predicted Top-1 class to an incorrect one, without attempting to alter the Top-k prediction. In many digital real-world scenarios, the Top-k prediction is more relevant. In this work, we propose a fast and accurate method for computing Top-k adversarial examples as a simple multi-objective optimization. We demonstrate its efficacy and performance by comparing it to other adversarial example crafting techniques. Moreover, based on this method, we propose Top-k Universal Adversarial Perturbations: image-agnostic tiny perturbations that cause the true class to be absent from the Top-k predictions for the majority of natural images. We experimentally show that our approach outperforms baseline methods and even improves existing techniques for finding Universal Adversarial Perturbations.
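To make the Top-k objective concrete, the sketch below shows a simple gradient-based attack in PyTorch that perturbs an image within an L-infinity budget until the true class drops out of the model's Top-k prediction. It is only an illustration under stated assumptions (batch size 1, a single margin surrogate loss, and placeholder names such as `topk_adversarial_example`), not the paper's multi-objective formulation.

```python
# Illustrative sketch only: a gradient-sign Top-k attack with a simplified
# margin surrogate, not the authors' multi-objective algorithm.
import torch


def topk_adversarial_example(model, x, y_true, k=5, eps=8 / 255, step=1 / 255, iters=50):
    """Perturb x (within an L-inf ball of radius eps) until the true class
    y_true leaves the Top-k prediction. Assumes a single image (batch size 1)
    with pixel values in [0, 1] and an integer label y_true."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        logits = model(x + delta)
        # k-th largest logit among the competitor classes (true class excluded).
        masked = logits.clone()
        masked[0, y_true] = float("-inf")
        kth_competitor = masked.topk(k, dim=1).values[0, -1]
        # Surrogate objective: push the true logit below the k-th competitor.
        loss = logits[0, y_true] - kth_competitor
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()          # gradient-sign descent on the margin
            delta.clamp_(-eps, eps)                    # stay inside the L-inf budget
            delta.copy_((x + delta).clamp(0, 1) - x)   # keep the perturbed image valid
            # Stop early once y_true is no longer among the Top-k classes.
            if y_true not in model(x + delta).topk(k, dim=1).indices[0]:
                break
        delta.grad.zero_()
    return (x + delta).detach()
```

A universal (image-agnostic) variant would accumulate such updates for a single shared `delta` over many training images rather than optimizing it per image.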
