| Trend | Dataset | Best Method | Paper title | Paper | Code | Compare |
|-------|---------|-------------|-------------|-------|------|---------|
Recent works have shown the effectiveness of randomized smoothing as a scalable technique for building neural network-based classifiers that are provably robust to $\ell_2$-norm adversarial perturbations.
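As a rough illustration of the prediction side of randomized smoothing (not the implementation from any particular paper), the smoothed classifier adds Gaussian noise to many copies of the input and takes a majority vote over the base classifier's predictions. In the sketch below, `sigma` and `n` are placeholder values, and the certification step that converts vote counts into a certified $\ell_2$ radius is omitted.

```python
import torch

def smoothed_predict(base_classifier, x, sigma=0.25, n=100):
    """Approximate the randomized-smoothing prediction for a single input x.

    Draws n Gaussian-noise corruptions of x, classifies each with the base
    model, and returns the majority-vote class. sigma and n are illustrative
    placeholders, not values from any cited work.
    """
    with torch.no_grad():
        # Replicate the input n times and add isotropic Gaussian noise.
        batch = x.unsqueeze(0).repeat(n, 1, 1, 1)
        noisy = batch + sigma * torch.randn_like(batch)
        logits = base_classifier(noisy)
        votes = logits.argmax(dim=1)
        # Majority vote over the noisy predictions.
        return torch.bincount(votes).argmax().item()
```

In the certified-robustness setting, the same vote counts are additionally used to lower-bound the probability of the top class and derive a provable $\ell_2$ radius, which this sketch leaves out.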
Adversarial examples have received a great deal of recent attention because of their potential to uncover security flaws in machine learning systems.
Research on adversarial examples in computer vision tasks has shown that small, often imperceptible changes to an image can induce misclassification, which has security implications for a wide range of image processing systems.
We propose a simple change to existing neural network structures for defending against gradient-based adversarial attacks.
Our experimental evaluation demonstrates that, compared with natural (undefended) training, adversarial defense methods can indeed increase the target model's vulnerability to membership inference attacks.
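For background on what such an attack looks like, a common baseline membership inference attack simply thresholds the model's confidence on the true label. The sketch below shows that generic baseline with a hypothetical `threshold` value; it is not the specific attack evaluated in the paper.

```python
import torch
import torch.nn.functional as F

def confidence_membership_guess(model, x, y, threshold=0.9):
    """Baseline membership inference: guess 'member' when the model assigns
    high confidence to the true label y (an integer class index).

    The threshold is an illustrative placeholder; practical attacks calibrate
    it using shadow models or held-out data.
    """
    with torch.no_grad():
        probs = F.softmax(model(x.unsqueeze(0)), dim=1)
        confidence = probs[0, y].item()
    return confidence >= threshold
```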
In this paper, we show that adversarial training can be cast as a discrete-time differential game.
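For reference, the usual starting point is the saddle-point formulation of adversarial training, in which the trainer chooses the weights $\theta$ and the adversary chooses a norm-bounded perturbation $\delta$; the notation below is generic rather than taken from this particular paper:

$$
\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \max_{\|\delta\| \le \epsilon} L\big(f_{\theta}(x+\delta),\, y\big) \Big]
$$

Viewing the inner maximization and the outer minimization as two opposing players whose moves alternate over the discrete steps of training is what motivates the game-theoretic interpretation.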
Adversarial training, in which a network is trained on adversarial examples, is one of the few defenses against adversarial attacks that withstands strong attacks.
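A minimal sketch of that recipe, assuming a PyTorch model on inputs in $[0,1]$, a PGD-style inner attack, and placeholder hyperparameters (`epsilon`, `alpha`, `steps`), is given below; it illustrates the general scheme rather than any specific paper's implementation.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8/255, alpha=2/255, steps=10):
    """Generate L-infinity-bounded adversarial examples with projected
    gradient descent. Hyperparameters are illustrative placeholders."""
    x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-epsilon, epsilon)
    x_adv = x_adv.clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the epsilon-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One training step on adversarial examples instead of clean inputs."""
    model.eval()                      # attack against fixed batch-norm statistics
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Switching the model to eval mode while crafting the perturbation and back to train mode for the weight update is one common convention; variants differ in the inner attack, the number of steps, and whether clean examples are mixed into the loss.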
Deep neural networks are vulnerable to adversarial attacks, which can fool them by adding minuscule perturbations to the input images.
SOTA for Adversarial Defense on CIFAR-10