We study randomized smoothing as a way to both improve performance on unperturbed data and increase robustness to adversarial attacks.
SOTA for Adversarial Defense on CIFAR-10
Natural images are virtually surrounded by low-density misclassified regions that can be efficiently discovered by gradient-guided search, enabling the generation of adversarial images.
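A minimal sketch of such a gradient-guided search is the fast gradient-sign step: perturb the input in the sign direction of the loss gradient to reach a nearby misclassified region. To keep the example runnable without a deep-learning framework, it uses a toy linear classifier with logistic loss, whose input gradient has a closed form; the function name `fgsm_step` and the toy weights are illustrative, not from any of the papers above.

```python
import numpy as np

def fgsm_step(x, w, b, y, eps):
    """One gradient-sign step toward a misclassified region (FGSM-style).

    Toy linear classifier: score s = w.x + b with logistic loss, so the
    input gradient is (p - y) * w in closed form; eps bounds the L-inf
    perturbation.
    """
    s = w @ x + b
    p = 1.0 / (1.0 + np.exp(-s))       # sigmoid probability of class 1
    grad_x = (p - y) * w               # d(logistic loss)/dx
    return x + eps * np.sign(grad_x)   # ascend the loss within the eps ball

# A correctly classified point close to the decision boundary...
w, b = np.array([1.0, -1.0]), 0.0
x = np.array([0.2, 0.1])               # score 0.1 > 0, so class 1
x_adv = fgsm_step(x, w, b, y=1, eps=0.15)
print(w @ x_adv + b)                   # score flips sign: misclassified
```

Even this single step flips the label here, illustrating how close the "low-density misclassified regions" can be for models with near-linear behavior.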
With the remarkable success of deep learning, Deep Neural Networks (DNNs) have been applied as dominant tools to various machine learning domains.
Recent works show that deep neural networks trained on image classification datasets are biased towards textures.
In this paper, we employ adversarial training to improve the performance of randomized smoothing.
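For context, the prediction side of randomized smoothing can be sketched in a few lines: the smoothed classifier returns the class the base classifier predicts most often under Gaussian input noise. The function name `smoothed_predict` and the toy threshold classifier are assumptions for illustration; real certified-radius computation requires the additional statistical machinery described in the randomized-smoothing literature.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n=1000, rng=None):
    """Majority vote of the base classifier under Gaussian input noise.

    This is the core of randomized smoothing: sample x + N(0, sigma^2 I)
    noise, run the base classifier on each noisy copy, and return the
    most frequently predicted class.
    """
    rng = np.random.default_rng(rng)
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    preds = np.array([base_classifier(x + e) for e in noise])
    return int(np.bincount(preds).argmax())

# Toy base classifier: class 1 iff the coordinates sum to a positive value.
base = lambda z: int(z.sum() > 0.0)
x = np.array([0.3, 0.2])
print(smoothed_predict(base, x, sigma=0.25, n=500, rng=0))  # -> 1
```

Adversarial training, as proposed in the paper above, changes how the base classifier is trained; the smoothing wrapper itself stays the same.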
Adversarial examples have received a great deal of recent attention because of their potential to uncover security flaws in machine learning systems.
Research on adversarial examples in computer vision tasks has shown that small, often imperceptible changes to an image can induce misclassification, which has security implications for a wide range of image processing systems.
In this work we revisit gradient regularization for adversarial robustness with some new ingredients.
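As a rough sketch of what gradient regularization means here: the training loss is augmented with a penalty on the norm of the input gradient, encouraging the loss surface to be flat around each example. The function `grad_reg_loss` and the linear-model setup are illustrative assumptions (chosen so the input gradient has a closed form), not the specific "new ingredients" of the paper.

```python
import numpy as np

def grad_reg_loss(w, b, x, y, lam):
    """Logistic loss plus an input-gradient penalty (gradient regularization).

    For a linear model the input gradient of the loss is (p - y) * w, so
    the penalty ||d loss / d x||^2 has the closed form (p - y)^2 * ||w||^2.
    lam trades off accuracy against flatness of the loss around x.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    ce = -(y * np.log(p) + (1 - y) * np.log(1 - p))   # cross-entropy term
    penalty = (p - y) ** 2 * (w @ w)                  # squared input-gradient norm
    return ce + lam * penalty

w, b, x = np.array([1.0, 2.0]), 0.1, np.array([0.5, -0.3])
print(grad_reg_loss(w, b, x, y=1, lam=0.0))   # plain cross-entropy
print(grad_reg_loss(w, b, x, y=1, lam=1.0))   # with the gradient penalty
```

In a deep network the penalty is computed by automatic differentiation rather than in closed form, but the objective has the same shape.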
In all cases, k-WTA networks are more robust than traditional networks under white-box attacks.
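The k-WTA (k-Winners-Take-All) activation behind that claim is simple to state: keep the k largest activations in a layer and zero out the rest, replacing ReLU. A minimal sketch, with `kwta` as an illustrative name:

```python
import numpy as np

def kwta(a, k):
    """k-Winners-Take-All activation: keep the k largest entries, zero the rest.

    Swapping ReLU for this activation makes the network's output
    discontinuous in its input, which is what hampers gradient-based
    white-box attacks.
    """
    out = np.zeros_like(a)
    idx = np.argpartition(a, -k)[-k:]   # indices of the k largest activations
    out[idx] = a[idx]
    return out

print(kwta(np.array([0.1, -2.0, 3.0, 0.5, 1.2]), k=2))  # only 3.0 and 1.2 survive
```

Because which k units "win" changes abruptly as the input moves, the gradients an attacker follows become unreliable, unlike the piecewise-linear landscape of ReLU networks.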