Network and Distributed System Security Symposium 2018

Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks

Code: mzweilin/EvadeML-Zoo

Although deep neural networks (DNNs) have achieved great success in many tasks, they can often be fooled by adversarial examples: inputs generated by adding small but purposeful distortions to natural examples.
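To make "small but purposeful distortions" concrete, below is a minimal illustrative sketch of a gradient-sign (FGSM-style) perturbation against a toy logistic model. This is not the paper's detection method and not code from EvadeML-Zoo; the toy weights, input, label, and epsilon budget are assumptions chosen only for illustration.

```python
import numpy as np

# Toy setup (assumed, for illustration only): a linear "classifier" with fixed
# random weights and a random "natural" input in [0, 1]^784.
rng = np.random.default_rng(0)
w = rng.normal(size=784)       # weights of a toy logistic classifier
x = rng.uniform(size=784)      # a natural example (e.g., a flattened image)

def loss_gradient(x, w, y):
    """Gradient of the logistic loss w.r.t. the input x."""
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))  # predicted probability
    return (p - y) * w                        # d(loss)/dx for logistic loss

eps = 0.05                                    # per-pixel perturbation budget
grad = loss_gradient(x, w, y=1.0)
# Gradient-sign step: a small, purposeful distortion that increases the loss
# on the true label while keeping the input in the valid [0, 1] range.
x_adv = np.clip(x + eps * np.sign(grad), 0.0, 1.0)

print("max per-pixel change:", np.abs(x_adv - x).max())
```

The point of the sketch is that every pixel moves by at most eps, so the distorted input stays visually close to the original even though the model's loss on the true label increases.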

ADVERSARIAL DEFENSE