Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning

1 Aug 2017 · Andrew P. Norton, Yanjun Qi

Recent studies have shown that attackers can force deep learning models to misclassify so-called "adversarial examples": maliciously generated images formed by making imperceptible modifications to pixel values. With growing interest in deep learning for security applications, it is important for security experts and users of machine learning to recognize how learning systems may be attacked…
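To make the idea of an "imperceptible modification" concrete, the sketch below shows one common attack, the Fast Gradient Sign Method (FGSM), which nudges each pixel by a small amount in the direction that increases the model's loss. This is an illustrative example only, not the paper's implementation; the model, epsilon value, and toy inputs are placeholders.

```python
# Minimal FGSM sketch (illustrative; not taken from Adversarial-Playground).
import torch
import torch.nn as nn


def fgsm_example(model: nn.Module, image: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarial copy of `image` intended to flip the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Shift every pixel by +/- epsilon along the sign of the loss gradient,
    # then clamp back into the valid pixel range [0, 1].
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()


if __name__ == "__main__":
    # Toy demo: a randomly initialized linear "classifier" on a 28x28 image.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)   # stand-in for an input image
    y = torch.tensor([3])          # stand-in for its true label
    x_adv = fgsm_example(model, x, y)
    print("max pixel change:", (x_adv - x).abs().max().item())
```

Because epsilon is small, the perturbed image looks unchanged to a human observer, yet the classifier's prediction can shift; visualizing exactly this effect across several attack methods is what the Adversarial-Playground tool is built for.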

