Defensive Distillation is Not Robust to Adversarial Examples

14 Jul 2016  ·  Nicholas Carlini, David Wagner

We show that defensive distillation is not secure: it is no more resistant to targeted misclassification attacks than unprotected neural networks.
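As background for the abstract's claim, the sketch below illustrates the temperature-scaled softmax at the heart of defensive distillation and the test-time saturation that makes gradient-based attacks on the softmax output appear to fail. The temperature value and logits are illustrative assumptions, not figures from the paper's experiments.

```python
# Minimal NumPy sketch of defensive distillation's temperature-scaled softmax.
# The training temperature and logit values here are hypothetical.
import numpy as np

def softmax(z, T=1.0):
    """Softmax with temperature T: exp(z/T) normalized over classes."""
    z = np.asarray(z, dtype=np.float64) / T
    z = z - z.max()               # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Distillation trains teacher and student at a high temperature (e.g. T=100),
# which pushes the learned logits to be roughly T times larger than usual.
train_T = 100.0
logits = np.array([5.0, 2.0, -1.0]) * train_T   # hypothetical saturated logits

# At test time the model is deployed at T=1, so the softmax saturates to a
# near one-hot vector and its gradient with respect to the input vanishes,
# which is what defeats attacks that differentiate through the probabilities.
print(softmax(logits, T=1.0))        # ~[1., 0., 0.]
print(softmax(logits, T=train_T))    # soft scores reappear when dividing by T
```

An attack formulated on the logits rather than on the softmax probabilities never sees this saturation, which is consistent with the paper's conclusion that distilled networks are no harder to attack than unprotected ones.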
