Towards Transferable Adversarial Perturbations with Minimum Norm

Transfer-based adversarial examples constitute one of the most important classes of black-box attacks. Prior work in this direction often requires a fixed but large perturbation radius to reach a good transfer success rate. In this work, we propose a \emph{geometry-aware framework} that generates a transferable adversarial perturbation with minimum norm for each input. Analogous to model selection in statistical machine learning, we leverage a validation model to select the optimal perturbation budget for each image. Extensive experiments verify the effectiveness of our framework in improving the image quality of the crafted adversarial examples. The methodology formed the foundation of our entry to the CVPR'21 Security AI Challenger: Unrestricted Adversarial Attacks on ImageNet, in which we ranked 1st out of 1,559 teams and surpassed the runner-up submissions by 4.59\% and 23.91\% in terms of final score and average image quality level, respectively.
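
The budget-selection idea described in the abstract, crafting a perturbation on a surrogate model and keeping the smallest radius that still fools a held-out validation model, can be sketched as follows. This is a minimal illustration under our own assumptions (an L_inf PGD attack on the surrogate, a fixed grid of budgets, a single-image batch), not the paper's implementation; the names pgd_attack, minimum_budget_attack, surrogate, and validator are hypothetical.

import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps, steps=10):
    # L_inf PGD on the surrogate: ascend the cross-entropy loss and project
    # back into the eps-ball around the clean image after every step.
    x_adv = x.clone().detach()
    alpha = 2.5 * eps / steps
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)             # keep pixels in [0, 1]
    return x_adv.detach()


def minimum_budget_attack(surrogate, validator, x, y, budgets=(2, 4, 8, 16)):
    # Try budgets (in /255 units) from smallest to largest and return the first
    # adversarial example that also fools the validation model; if none does,
    # fall back to the example crafted at the largest budget.
    for eps in budgets:
        x_adv = pgd_attack(surrogate, x, y, eps / 255.0)
        transfers = validator(x_adv).argmax(dim=1) != y  # transfer check on the held-out model
        if transfers.item():
            return x_adv, eps
    return x_adv, budgets[-1]

In practice, the attack, the budget grid, and the selection criterion would follow the paper's geometry-aware procedure; the sketch only conveys the role of the validation model in picking a per-image minimum-norm budget.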
