Adversarial Defense
163 papers with code • 9 benchmarks • 5 datasets
Most implemented papers
Towards Deep Learning Models Resistant to Adversarial Attacks
We study adversarial robustness through the lens of robust optimization, whose principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal.
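The paper casts adversarial training as a min-max problem whose inner maximization is solved with projected gradient descent (PGD). Below is a minimal PyTorch sketch of that loop, assuming `model`, `optimizer`, `x`, and `y` are supplied by the caller; the step sizes and iteration count are illustrative placeholders, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: L-infinity projected gradient descent on the loss."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()   # gradient ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)       # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: update the model on the adversarial examples."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```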
Technical Report on the CleverHans v2.1.0 Adversarial Examples Library
An adversarial example library for constructing attacks, building defenses, and benchmarking both.
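For flavor, here is a plain-PyTorch sketch of the fast gradient sign method (FGSM), the canonical single-step attack that CleverHans packages; this is not the library's own API, only an illustration of the kind of attack it provides. `model`, `x`, `y`, and `eps` are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.1):
    """Fast Gradient Sign Method: perturb the input by one signed-gradient step."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```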
Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
Alongside ImageNet-C, a benchmark of common corruptions, we propose a new dataset called ImageNet-P, which enables researchers to benchmark a classifier's robustness to common perturbations.
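A hedged sketch of the evaluation idea, using on-the-fly Gaussian noise at a few severities as a simplified stand-in for the paper's precomputed corruption sets; `model`, `loader`, and the severity values are placeholders.

```python
import torch

@torch.no_grad()
def error_under_gaussian_noise(model, loader, severities=(0.04, 0.08, 0.12, 0.16, 0.20)):
    """Classification error at increasing corruption severities."""
    model.eval()
    errors = {}
    for sigma in severities:
        wrong = total = 0
        for x, y in loader:
            x_corrupt = (x + sigma * torch.randn_like(x)).clamp(0, 1)  # corrupt the batch
            wrong += (model(x_corrupt).argmax(dim=1) != y).sum().item()
            total += y.numel()
        errors[sigma] = wrong / total
    return errors
```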
The Limitations of Deep Learning in Adversarial Settings
In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
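A simplified sketch in this spirit: compute the forward derivative (Jacobian) of the logits with respect to the input, score each pixel by how strongly it pushes the prediction toward a chosen target class, and perturb the most salient one. The actual attack iterates and perturbs pixel pairs; `model`, `target`, and `theta` are placeholders, and a single example with batch size 1 is assumed.

```python
import torch

def jsma_step(model, x, target, theta=1.0):
    """One step of a simplified Jacobian-based saliency map attack (x: shape (1, C, H, W))."""
    x = x.detach()
    # Jacobian of the logits w.r.t. the input: shape (num_classes, 1, C, H, W)
    jac = torch.autograd.functional.jacobian(lambda inp: model(inp).squeeze(0), x)
    grad_target = jac[target]                 # effect of each pixel on the target logit
    grad_others = jac.sum(dim=0) - grad_target
    # A pixel is salient if increasing it raises the target logit and lowers the rest
    saliency = torch.where(
        (grad_target > 0) & (grad_others < 0),
        grad_target * grad_others.abs(),
        torch.zeros_like(grad_target),
    )
    idx = saliency.flatten().argmax()
    x_adv = x.clone().flatten()
    x_adv[idx] = (x_adv[idx] + theta).clamp(0, 1)   # bump the chosen pixel
    return x_adv.view_as(x)
```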
Certified Adversarial Robustness via Randomized Smoothing
We show how to turn any classifier that classifies well under Gaussian noise into a new classifier that is certifiably robust to adversarial perturbations under the $\ell_2$ norm.
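A minimal Monte Carlo sketch of the smoothed classifier $g(x) = \arg\max_c \Pr[f(x + \mathcal{N}(0, \sigma^2 I)) = c]$; the paper additionally derives a certified $\ell_2$ radius from a lower bound on the top-class probability, which is omitted here. `model`, `num_classes`, `sigma`, and the sample count are placeholders.

```python
import torch

@torch.no_grad()
def smoothed_predict(model, x, num_classes, sigma=0.25, n_samples=100):
    """Predict with the randomized-smoothing classifier by majority vote under Gaussian noise."""
    counts = torch.zeros(x.shape[0], num_classes, device=x.device)
    for _ in range(n_samples):
        noisy = x + sigma * torch.randn_like(x)          # sample Gaussian noise
        preds = model(noisy).argmax(dim=1)
        counts += torch.nn.functional.one_hot(preds, num_classes).float()
    return counts.argmax(dim=1)                          # most frequently predicted class
```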
Theoretically Principled Trade-off between Robustness and Accuracy
We identify a trade-off between robustness and accuracy that serves as a guiding principle in the design of defenses against adversarial examples.
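A sketch of the resulting surrogate objective: natural cross-entropy plus a weighted KL term on adversarially perturbed inputs, where the perturbation maximizes that KL term inside an L-infinity ball. Hyperparameters and the PyTorch plumbing here are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def trades_loss(model, x, y, beta=6.0, eps=8/255, alpha=2/255, steps=10):
    """Natural cross-entropy + beta * KL(f(x) || f(x_adv))."""
    model.eval()
    logits_nat = model(x).detach()
    x_adv = x + 0.001 * torch.randn_like(x)              # small random start
    for _ in range(steps):                               # maximize the KL term
        x_adv = x_adv.detach().requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                      F.softmax(logits_nat, dim=1), reduction="batchmean")
        grad = torch.autograd.grad(kl, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    model.train()
    logits = model(x)
    robust_kl = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                         F.softmax(logits, dim=1), reduction="batchmean")
    return F.cross_entropy(logits, y) + beta * robust_kl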
Adversarial Training for Free!
Adversarial training, in which a network is trained on adversarial examples, is one of the few defenses that withstands strong attacks.
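A rough sketch of the "free" trick: replay each minibatch several times and reuse the single backward pass per replay to update both the adversarial perturbation and the model weights, so robustness comes at little extra training cost. `model`, `loader`, `optimizer`, and the hyperparameters are placeholders.

```python
import torch
import torch.nn.functional as F

def free_adversarial_training_epoch(model, loader, optimizer, eps=8/255, replays=4):
    """One epoch of 'free' adversarial training with minibatch replay."""
    model.train()
    delta = None
    for x, y in loader:
        if delta is None or delta.shape != x.shape:
            delta = torch.zeros_like(x)                  # perturbation carried across replays
        for _ in range(replays):
            delta.requires_grad_(True)
            loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
            optimizer.zero_grad()
            loss.backward()                              # one backward pass per replay
            # Reuse the input gradient from that same pass to grow the perturbation
            delta = (delta + eps * delta.grad.sign()).clamp(-eps, eps).detach()
            optimizer.step()                             # and to update the weights
```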
ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models
Rather than leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks that directly estimate the gradients of the targeted DNN to generate adversarial examples.
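A minimal sketch of the zeroth-order idea: estimate per-coordinate gradients of a black-box loss with symmetric finite differences, using only queries to the model's outputs and no backpropagation. `loss_fn` is a placeholder that wraps queries to the target model, and the coordinate subset size and step `h` are illustrative.

```python
import torch

@torch.no_grad()
def zoo_coordinate_gradient(loss_fn, x, n_coords=128, h=1e-4):
    """Finite-difference gradient estimate over a random subset of input coordinates."""
    flat = x.flatten()
    grad = torch.zeros_like(flat)
    coords = torch.randperm(flat.numel())[:n_coords]
    for i in coords:
        e = torch.zeros_like(flat)
        e[i] = h
        # (f(x + h e_i) - f(x - h e_i)) / (2h), queried from the black box
        grad[i] = (loss_fn((flat + e).view_as(x)) - loss_fn((flat - e).view_as(x))) / (2 * h)
    return grad.view_as(x)
```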
AOGNets: Compositional Grammatical Architectures for Deep Learning
This paper presents deep compositional grammatical architectures which harness the best of two worlds: grammar models and DNNs.
Certified Defenses against Adversarial Examples
While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs.