Enhancing the Transferability of Adversarial Attacks via Scale Ensemble

29 Sep 2021  ·  Xianfeng Gao, Zhikai Chen, Bo Zhang

A growing line of work on adversarial example generation in computer vision has exposed how vulnerable deep learning models are. Motivated by the drop in transferability between models with different input sizes, we present a novel attack, the Scale Ensemble Method (SEM), which uses a scale-ensemble input framework to enhance the transferability of adversarial images. Our method preserves the characteristic textures of the original image by zooming the surrogate model's input in and out in a specific sequence while generating adversarial examples. The preserved texture features highlight the important attack regions and increase the diversity of adversarial perturbations, enabling a more aggressive attack. Experiments on ImageNet show that our method narrows the transferability gap between models with different input sizes and achieves roughly an 8% higher success rate than state-of-the-art input transformation methods. We also demonstrate that our method integrates with existing attacks and bypasses a variety of defense methods with over 90% success rate.
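The abstract only sketches the mechanics (an iterative attack whose gradients are ensembled over zoomed copies of the surrogate's input), so the following is a minimal, hypothetical PyTorch sketch of one plausible realization, not the authors' released implementation. The function name `scale_ensemble_attack`, the scale list, and the step/epsilon settings are illustrative assumptions; the paper's exact zoom sequence is not reproduced here.

```python
# Hypothetical sketch of a scale-ensemble transfer attack (illustrative only).
# Assumes a PyTorch classifier `model` and an input batch `images` in [0, 1].
import torch
import torch.nn.functional as F

def scale_ensemble_attack(model, images, labels, eps=16 / 255, steps=10,
                          scales=(0.75, 0.9, 1.0, 1.1, 1.25)):
    """Iterative sign-gradient attack averaging gradients over rescaled inputs.

    The scale list and step schedule are assumptions for illustration; they
    stand in for the specific zoom-in/zoom-out sequence described in the paper.
    """
    alpha = eps / steps                       # per-step perturbation budget
    adv = images.clone().detach()
    _, _, h, w = images.shape

    for _ in range(steps):
        adv.requires_grad_(True)
        grad_sum = torch.zeros_like(adv)
        for s in scales:
            # Zoom the current adversarial image in or out ...
            scaled = F.interpolate(adv, size=(int(h * s), int(w * s)),
                                   mode='bilinear', align_corners=False)
            # ... then resize back to the surrogate model's expected input size.
            scaled = F.interpolate(scaled, size=(h, w),
                                   mode='bilinear', align_corners=False)
            loss = F.cross_entropy(model(scaled), labels)
            grad_sum = grad_sum + torch.autograd.grad(loss, adv)[0]
        avg_grad = grad_sum / len(scales)     # ensemble of scale-wise gradients

        with torch.no_grad():
            adv = adv + alpha * avg_grad.sign()                    # ascend the loss
            adv = torch.min(torch.max(adv, images - eps), images + eps)  # eps-ball
            adv = torch.clamp(adv, 0.0, 1.0).detach()              # valid image range
    return adv
```

In use, `model` would be a surrogate ImageNet classifier in eval mode (with any input normalization folded into the model), and transferability would be measured by evaluating the returned `adv` batch on separately trained target models with different input sizes.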
