Boosting Adversarial Attacks with Momentum

CVPR 2018 · Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, Jianguo Li

Deep neural networks are vulnerable to adversarial examples, which poses security concerns on these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed.
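The abstract describes integrating a momentum term into iterative gradient-based attacks to stabilize update directions. Below is a minimal, illustrative PyTorch sketch of a momentum iterative FGSM-style attack: the velocity accumulates the L1-normalized loss gradient, and each step moves along the sign of the velocity while staying inside an L-infinity ball around the input. The function name `mi_fgsm`, the hyperparameter defaults, and the assumption of NCHW image batches in [0, 1] are illustrative choices, not taken from this page.

```python
import torch


def mi_fgsm(model, x, y, eps=16 / 255, steps=10, mu=1.0):
    """Sketch of a momentum iterative FGSM-style attack.

    Assumptions (not from the source page): `model` is a PyTorch
    classifier taking NCHW batches, `x` holds pixel values in [0, 1],
    and `y` holds integer class labels.
    """
    alpha = eps / steps            # per-step size; total budget stays eps
    g = torch.zeros_like(x)        # accumulated gradient (momentum term)
    x_adv = x.clone().detach()
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)

        # Normalize the gradient by its L1 norm before accumulating,
        # so the momentum term is scale-invariant across steps.
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)

        # Sign step along the accumulated direction, then project back
        # into the eps-ball and the valid pixel range.
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = x_adv.clamp(0.0, 1.0)

    return x_adv.detach()
```

With `mu=0` this reduces to a plain iterative sign-gradient attack; the momentum term is what the abstract credits with escaping poor local maxima and improving transferability to black-box models.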
