Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks

ICCV 2019 · Aamir Mustafa, Salman Khan, Munawar Hayat, Roland Goecke, Jianbing Shen, Ling Shao

Deep neural networks are vulnerable to adversarial attacks, which can fool them by adding minuscule perturbations to the input images. The robustness of existing defenses suffers greatly under white-box attack settings, where an adversary has full knowledge about the network and can iterate several times to find strong perturbations...
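The white-box, iterated threat model described above is typically instantiated as projected gradient descent (PGD). A minimal sketch of a PGD attack is given below, assuming a toy logistic-regression "network" so the input gradient is available in closed form; the function name `pgd_attack` and all parameter values are illustrative, not from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_x(x, w, b, y):
    # Gradient of the binary cross-entropy loss w.r.t. the input x
    # for a logistic model p = sigmoid(w . x + b).
    p = sigmoid(x @ w + b)
    return (p - y) * w

def pgd_attack(x, w, b, y, eps=0.1, alpha=0.02, steps=10):
    """Iterated signed-gradient ascent on the loss, projected back
    into the L-infinity eps-ball around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        g = loss_grad_x(x_adv, w, b, y)
        x_adv = x_adv + alpha * np.sign(g)        # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into eps-ball
    return x_adv

rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.0
x, y = rng.normal(size=4), 1.0
x_adv = pgd_attack(x, w, b, y)
# The perturbation never exceeds eps in any coordinate.
print(float(np.max(np.abs(x_adv - x))))
```

Because every step is clipped back into the eps-ball, the final perturbation is imperceptibly small by construction, which is exactly why defenses must hold up under many such iterations.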


Evaluation Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Adversarial Defense | CIFAR-10 | PCL (against PGD, white box) | Accuracy | 46.7 | #1 |