Practical Black-Box Attacks against Machine Learning

8 Feb 2016 · Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, Ananthram Swami

Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior...
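To illustrate the notion of an adversarial example, here is a minimal sketch in the spirit of the fast gradient sign method (due to Goodfellow et al., whose earlier work this paper builds on). The toy linear classifier, its weights, and the epsilon value are illustrative assumptions, not details from the paper: a small per-feature perturbation, aligned with the sign of the loss gradient, flips the model's prediction.

```python
import numpy as np

# Toy binary classifier: sigmoid(w . x + b).
# Weights and inputs below are arbitrary illustrative values.
w = np.array([2.0, -3.0, 1.5])
b = 0.1

def predict(x):
    """Return P(class = 1) for input x."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def fgsm(x, eps):
    """FGSM-style perturbation: x' = x + eps * sign(grad_x loss),
    using the model's own predicted label as the target of the loss."""
    p = predict(x)
    y = 1.0 if p >= 0.5 else 0.0
    # Gradient of the cross-entropy loss w.r.t. x for a sigmoid-linear model
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, -0.5, 0.2])   # clean input, classified as class 1
x_adv = fgsm(x, eps=0.7)         # each feature moves by at most 0.7
print(predict(x) >= 0.5)         # True  (class 1)
print(predict(x_adv) >= 0.5)     # False (prediction flipped)
```

Note that this sketch assumes white-box gradient access; the paper's contribution is the black-box setting, where the attacker queries the target model and crafts adversarial examples against a locally trained substitute.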

