Adversarial Defense via Local Flatness Regularization

27 Oct 2019 · Jia Xu, Yiming Li, Yong Jiang, Shu-Tao Xia

Adversarial defense is a popular and important research area. Given the intrinsic mechanism of adversarial attacks, one of the most straightforward and effective ways of defending against them is to analyze the properties of the loss surface in the input space. In this paper, we define the local flatness of the loss surface as the maximum value of a chosen norm of the gradient with respect to the input within a neighborhood centered on the benign sample, and we discuss the relationship between local flatness and adversarial vulnerability. Based on this analysis, we propose a novel defense approach via regularizing the local flatness, dubbed local flatness regularization (LFR). We also demonstrate the effectiveness of the proposed method from other perspectives, such as the human visual mechanism, and theoretically analyze the relationship between LFR and other related methods. Experiments are conducted to verify our theory and demonstrate the superiority of the proposed method.
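The flatness definition in the abstract can be written compactly. In our notation (not necessarily the paper's): for a benign sample $x$ with label $y$, classifier $f$, loss $\ell$, and an $\epsilon$-neighborhood $B_\epsilon(x)$,

$$\mathcal{F}_\epsilon(x) = \max_{x' \in B_\epsilon(x)} \big\| \nabla_{x'}\, \ell\big(f(x'), y\big) \big\|_p,$$

and a natural reading of LFR is to add a penalty proportional to $\mathcal{F}_\epsilon(x)$ to the standard training loss.

As a concrete illustration, here is a minimal PyTorch sketch of such a regularizer. It approximates the inner maximization with a few PGD-style ascent steps and uses an $\ell_1$ gradient norm; the function name `lfr_loss` and the hyperparameters `epsilon`, `alpha`, `steps`, and `lam` are illustrative choices, so treat this as a sketch under those assumptions rather than the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def lfr_loss(model, x, y, epsilon=8/255, alpha=2/255, steps=3, lam=1.0):
    """Cross-entropy loss plus a local-flatness penalty.

    Approximates max_{x' in B_eps(x)} ||grad_{x'} loss(f(x'), y)||_1
    with a few gradient-ascent steps (a PGD-style proxy, not
    necessarily the paper's exact inner solver).
    """
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Step toward larger loss as a proxy for the point with the
        # largest gradient norm inside the epsilon-ball.
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
        x_adv = x_adv.clamp(0, 1).detach().requires_grad_(True)

    # Gradient norm at the approximate worst-case point; create_graph=True
    # keeps the penalty differentiable w.r.t. the model parameters
    # (double backpropagation).
    adv_loss = F.cross_entropy(model(x_adv), y)
    grad_adv = torch.autograd.grad(adv_loss, x_adv, create_graph=True)[0]
    flatness = grad_adv.flatten(1).norm(p=1, dim=1).mean()

    clean_loss = F.cross_entropy(model(x), y)
    return clean_loss + lam * flatness
```

In a training loop, `lfr_loss(model, x, y).backward()` would replace the usual cross-entropy backward pass; the flatness term then penalizes sharp loss-surface regions around each benign sample.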
