Mitigating Adversarial Effects Through Randomization

ICLR 2018 • Cihang Xie • Jianyu Wang • Zhishuai Zhang • Zhou Ren • Alan Yuille

Convolutional neural networks have demonstrated high accuracy on various tasks in recent years, yet they are extremely vulnerable to adversarial examples: imperceptible perturbations added to clean images can cause them to fail. To mitigate these adversarial effects, the paper proposes adding randomization to the input at inference time, using two operations: random resizing, which rescales the image to a random size, and random padding, which pads zeros around the resized image at a random location.
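The sketch below illustrates the inference-time randomization idea in PyTorch; it is a minimal, assumed implementation, not the authors' released code. The size range (299 up to 331) mirrors the Inception-v3 setup commonly used in this line of work, and `model` stands for any frozen image classifier.

```python
# Minimal sketch (assumed): random resizing + random zero padding applied
# to the input before a frozen classifier. Sizes are illustrative.
import random
import torch
import torch.nn.functional as F

def randomize_input(x, min_size=299, max_size=331):
    """Randomly resize x (N, C, H, W), then zero-pad it to max_size x max_size."""
    # 1. Random resizing: pick a target size in [min_size, max_size - 1].
    rnd = random.randint(min_size, max_size - 1)
    x = F.interpolate(x, size=(rnd, rnd), mode="nearest")
    # 2. Random padding: place the resized image at a random offset
    #    inside a max_size x max_size canvas of zeros.
    pad_total = max_size - rnd
    pad_left = random.randint(0, pad_total)
    pad_top = random.randint(0, pad_total)
    x = F.pad(x, (pad_left, pad_total - pad_left, pad_top, pad_total - pad_top))
    return x

def predict_with_randomization(model, x):
    """Run the classifier on a randomized copy of the input."""
    model.eval()
    with torch.no_grad():
        return model(randomize_input(x))
```

Because the resizing and padding are re-sampled on every forward pass, an attacker cannot know the exact transformation applied at test time, which is what makes gradient-based adversarial perturbations less effective.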

