Releasing Inequality Phenomena in $L_{\infty}$-Adversarial Training via Input Gradient Distillation

16 May 2023  ·  Junxi Chen, Junhao Dong, Xiaohua Xie ·

Since adversarial examples were first demonstrated and shown to cause catastrophic degradation in DNNs, many adversarial defense methods have been devised, among which adversarial training is considered the most effective. However, a recent work revealed the inequality phenomena in $l_{\infty}$-adversarial training and showed that an $l_{\infty}$-adversarially trained model is vulnerable when a few important pixels are perturbed by i.i.d. noise or occluded. In this paper, we propose a simple yet effective method called Input Gradient Distillation (IGD) to release the inequality phenomena in $l_{\infty}$-adversarial training. Experiments show that, while preserving the model's adversarial robustness, IGD decreases the $l_{\infty}$-adversarially trained model's error rate under inductive noise and inductive occlusion by up to 60% and 16.53% compared to PGDAT, and its error rate on noisy images in ImageNet-C by up to 21.11%. Moreover, we formally explain why equalizing the model's saliency map improves such robustness.
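The abstract does not spell out the IGD objective, so the following is only a minimal sketch of what input-gradient distillation on top of PGD adversarial training (PGDAT) could look like: a standard-trained teacher's input gradient (saliency map) guides the student's input gradient via a cosine-similarity term added to the usual adversarial loss. The function names, the cosine formulation, and the weighting `lam` are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch (assumed form, not the authors' code): PGD adversarial training with an
# input-gradient alignment term against a standard-trained teacher.
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard l_inf PGD attack used in PGD adversarial training (PGDAT)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()


def input_gradient(model, x, y, create_graph=False):
    """Gradient of the cross-entropy loss w.r.t. the clean input (saliency map)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x, create_graph=create_graph)
    return grad


def igd_style_loss(student, teacher, x, y, lam=1.0):
    """Adversarial loss plus a cosine term that pulls the student's input
    gradient toward a frozen standard-trained teacher's (assumed objective)."""
    x_adv = pgd_attack(student, x, y)
    adv_loss = F.cross_entropy(student(x_adv), y)
    # create_graph=True so the distillation term can backprop into the student.
    g_student = input_gradient(student, x, y, create_graph=True)
    g_teacher = input_gradient(teacher, x, y).detach()
    cos = F.cosine_similarity(g_student.flatten(1), g_teacher.flatten(1), dim=1).mean()
    return adv_loss + lam * (1.0 - cos)
```

In a training loop, `igd_style_loss(student, teacher, x, y).backward()` would replace the plain PGDAT loss; the intent of such a term is to make the saliency map of the robust model more evenly distributed (closer to the standard model's), which is the property the abstract links to robustness against inductive noise and occlusion.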
