Boosting Adversarial Attacks with Momentum

CVPR 2018
Yinpeng Dong • Fangzhou Liao • Tianyu Pang • Hang Su • Jun Zhu • Xiaolin Hu • Jianguo Li

However, most existing adversarial attacks can only fool a black-box model with a low success rate. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating the momentum term into the iterative process for attacks, our methods can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks.
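The core idea is to accumulate a velocity vector of normalized gradients across iterations and step in the sign of that accumulated direction. Below is a minimal PyTorch sketch of this momentum iterative FGSM (MI-FGSM) update; the function name, the `model`/`x`/`y` arguments, and the hyperparameter defaults are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=16/255, num_iter=10, mu=1.0):
    """Sketch of the momentum iterative FGSM (MI-FGSM) update.

    Accumulates L1-normalized gradients with decay factor mu, then
    takes a sign step each iteration within an L_inf budget of eps.
    All argument names and default values are illustrative.
    """
    alpha = eps / num_iter            # per-step size so total perturbation stays within eps
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)           # accumulated gradient (momentum buffer)
    for _ in range(num_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Normalize the current gradient by its L1 norm before accumulating,
        # following the paper's update rule.
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * g.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv
```

Normalizing each gradient before accumulation keeps updates on a comparable scale across iterations, so the momentum term smooths the update direction rather than being dominated by whichever step has the largest raw gradient.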

Full paper
