Adversarial Robustness

400 papers with code • 6 benchmarks • 7 datasets

Adversarial Robustness measures how well a machine learning model maintains its predictions under adversarial attacks: small, deliberately crafted input perturbations designed to cause misclassification.

Most implemented papers

Towards Deep Learning Models Resistant to Adversarial Attacks

MadryLab/mnist_challenge ICLR 2018

The paper frames adversarial training as a min-max robust optimization problem; the principled nature of this view enables identifying methods for both training and attacking neural networks that are reliable and, in a certain sense, universal.
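A minimal sketch of the paper's inner maximization, projected gradient descent (PGD) under an $\ell_\infty$ ball. To stay self-contained it attacks a binary logistic-regression model, where the input gradient has a closed form; the function name and all parameter values are illustrative, not the authors' code.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Projected gradient descent under an L-infinity ball (sketch).

    For binary logistic regression the gradient of the cross-entropy
    loss w.r.t. the input is available in closed form:
        dL/dx = (sigmoid(w.x + b) - y) * w
    """
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        grad = (p - y) * w                        # input gradient of the loss
        x_adv = x_adv + alpha * np.sign(grad)     # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
    return x_adv
```

Each step moves in the sign of the input gradient and then projects back onto the $\epsilon$-ball around the clean input; adversarial training simply trains on `x_adv` in place of `x`.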

Generating Adversarial Examples with Adversarial Networks

IJCAI 2018

Proposes AdvGAN, which trains a generative adversarial network to produce adversarial perturbations, so that once the generator is trained, adversarial examples can be generated efficiently with a single forward pass.

Robustness May Be at Odds with Accuracy

louis2889184/pytorch-adversarial-training ICLR 2019

We show that there may exist an inherent tension between the goal of adversarial robustness and that of standard generalization.

Theoretically Principled Trade-off between Robustness and Accuracy

yaodongyu/TRADES 24 Jan 2019

We identify a trade-off between robustness and accuracy that serves as a guiding principle in the design of defenses against adversarial examples.
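The trade-off is operationalized as the TRADES objective: natural cross-entropy plus $\beta$ times a KL divergence between predictions on clean and adversarial inputs. A hedged sketch using plain probability vectors (the function name and the `beta` default are illustrative):

```python
import numpy as np

def trades_loss(p_nat, p_adv, y, beta=6.0):
    """TRADES surrogate loss (sketch): accuracy term + robustness term.

    p_nat / p_adv are predicted class-probability vectors on the clean
    and adversarial input; y is the true class index. The first term
    drives standard accuracy, the KL term penalizes prediction changes
    under perturbation, and beta trades the two off.
    """
    eps = 1e-12  # numerical guard for log(0)
    ce = -np.log(p_nat[y] + eps)
    kl = np.sum(p_nat * (np.log(p_nat + eps) - np.log(p_adv + eps)))
    return ce + beta * kl
```

When the model predicts identically on clean and perturbed inputs, the KL term vanishes and only the natural loss remains.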

Certified Adversarial Robustness via Randomized Smoothing

locuslab/smoothing 8 Feb 2019

We show how to turn any classifier that classifies well under Gaussian noise into a new classifier that is certifiably robust to adversarial perturbations under the $\ell_2$ norm.
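A Monte-Carlo sketch of the smoothed classifier, using only NumPy and the standard library. Note the simplification: the paper certifies with a proper lower confidence bound on the top-class probability (and abstains when it falls below 1/2), whereas this sketch plugs in the raw empirical frequency.

```python
import numpy as np
from statistics import NormalDist

def smoothed_predict(base_classifier, x, sigma=0.25, n=1000, seed=0):
    """Smoothed classifier g(x) = argmax_c P(f(x + N(0, sigma^2 I)) = c).

    `base_classifier` maps a batch of inputs to integer class labels.
    Returns (top_class, l2_radius) with radius sigma * Phi^-1(p_top),
    using the empirical top-class frequency p_top (the paper uses a
    lower confidence bound instead).
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    labels = base_classifier(x[None, :] + noise)
    counts = np.bincount(labels, minlength=2)
    top = int(np.argmax(counts))
    p_top = min(counts[top] / n, 1.0 - 1e-6)  # clamp so Phi^-1 stays finite
    radius = sigma * NormalDist().inv_cdf(p_top) if p_top > 0.5 else 0.0
    return top, radius
```

Larger `sigma` certifies larger radii but degrades the base classifier's accuracy under noise, which is the central tension in the method.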

Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks

fra31/auto-attack ICML 2020

The field of defense strategies against adversarial attacks has grown significantly in recent years, but progress is hampered because the evaluation of adversarial defenses is often insufficient and thus gives a false impression of robustness.
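The ensemble's evaluation protocol counts an example as robust only if no attack in the ensemble fools the model on it, so the reported number is a worst case. A small sketch of that aggregation, assuming per-attack boolean success masks as input (a hypothetical data layout, not the library's API):

```python
import numpy as np

def robust_accuracy(success_masks):
    """Worst-case robust accuracy over an ensemble of attacks (sketch).

    `success_masks` is a list of boolean arrays, one per attack; entry i
    is True if that attack fooled the model on example i. An example
    counts as robust only if *no* attack in the ensemble succeeds on it.
    """
    fooled = np.logical_or.reduce(success_masks)  # fooled by at least one attack
    return 1.0 - fooled.mean()
```

Because attacks fail on different examples, the worst case over several diverse attacks is typically well below the robust accuracy any single attack reports.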

EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples

ysharma1126/EAD-Attack 13 Sep 2017

Recent studies have highlighted the vulnerability of deep neural networks (DNNs) to adversarial examples - a visually indistinguishable adversarial image can easily be crafted to cause a well-trained model to misclassify.
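EAD's elastic-net (L1 + L2) regularization is what makes its perturbations sparse, and the key ingredient is an iterative shrinkage-thresholding (ISTA) step. A simplified sketch of that shrinkage operator, omitting the paper's box constraint to the valid pixel range:

```python
import numpy as np

def soft_threshold(z, x_orig, beta):
    """ISTA shrinkage step of the kind used by elastic-net attacks (sketch).

    Shrinks the perturbation z - x_orig elementwise by beta (the L1
    regularization weight): perturbation components smaller than beta
    in magnitude are zeroed out, which yields sparse adversarial
    perturbations.
    """
    delta = z - x_orig
    shrunk = np.sign(delta) * np.maximum(np.abs(delta) - beta, 0.0)
    return x_orig + shrunk
```

Applied after each gradient step, this drives most pixels of the perturbation exactly to zero while the L2 term keeps the surviving components small.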

Improving Adversarial Robustness via Promoting Ensemble Diversity

P2333/Adaptive-Diversity-Promoting 25 Jan 2019

Though deep neural networks have achieved significant progress on various tasks, often enhanced by model ensembling, existing high-performance models can be vulnerable to adversarial attacks.

Adversarial Robustness Toolbox v1.0.0

IBM/adversarial-robustness-toolbox 3 Jul 2018

Defending machine learning models involves certifying and verifying model robustness, and hardening models with approaches such as pre-processing inputs, augmenting training data with adversarial samples, and leveraging runtime detection methods to flag inputs that might have been modified by an adversary.
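As one concrete instance of the input pre-processing defenses mentioned above, here is a bit-depth-reduction ("feature squeezing") sketch in plain NumPy; this is an illustration of the idea, not ART's API:

```python
import numpy as np

def reduce_bit_depth(x, bits=3):
    """Squeeze inputs in [0, 1] down to `bits` bits per channel (sketch).

    Quantization removes the small-magnitude perturbations many attacks
    rely on. Comparing the model's prediction before and after squeezing
    can also serve as a simple runtime detector for adversarially
    modified inputs.
    """
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels
```

Like most pre-processing defenses, this raises the bar for attacks that are unaware of it but can be circumvented by adaptive attacks that optimize through the quantization, which is why toolboxes like ART pair hardening with runtime detection.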

Adversarial Robustness as a Prior for Learned Representations

MadryLab/robustness 3 Jun 2019

In this work, we show that robust optimization can be re-cast as a tool for enforcing priors on the features learned by deep neural networks.