Towards Deep Learning Models Resistant to Adversarial Attacks

ICLR 2018 · Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu

Recent work has demonstrated that deep neural networks are vulnerable to adversarial examples: inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models...
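To make the notion of an adversarial example concrete, here is a minimal sketch of a projected gradient descent (PGD) attack, the kind of L∞-bounded perturbation the paper studies. It uses a toy logistic model rather than a deep network, and the function name, model, and step sizes are illustrative choices, not taken from the paper's code:

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """L-infinity PGD on a toy logistic model (illustration only).

    Repeatedly take sign-gradient ascent steps on the loss, then
    project the perturbed input back into the eps-ball around x.
    """
    x_adv = x.copy()
    for _ in range(steps):
        # logistic loss gradient w.r.t. the input, for label y in {0, 1}
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        grad = (p - y) * w
        x_adv = x_adv + alpha * np.sign(grad)
        # projection onto the L-infinity ball of radius eps around x
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

The resulting `x_adv` differs from `x` by at most `eps` in every coordinate, yet the model's loss on it is strictly higher, which is exactly the "almost indistinguishable but misclassified" behavior described above.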

