Adversarial · 60 papers with code


# Technical Report on the CleverHans v2.1.0 Adversarial Examples Library

3 Oct 2016 · openai/cleverhans

An adversarial example library for constructing attacks, building defenses, and benchmarking both


# Foolbox: A Python toolbox to benchmark the robustness of machine learning models

13 Jul 2017 · bethgelab/foolbox

Foolbox is a new Python package to generate such adversarial perturbations and to quantify and compare the robustness of machine learning models.
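
The kind of adversarial perturbation such libraries generate can be illustrated with the classic Fast Gradient Sign Method (FGSM). This is a minimal NumPy sketch, not Foolbox's actual API; the toy linear model and the helper name `fgsm_perturb` are illustrative assumptions.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """FGSM: take one step of size eps in the sign of the loss gradient.
    `grad` is the gradient of the loss with respect to the input x."""
    return x + eps * np.sign(grad)

# Toy linear classifier: loss = -w . x for the true class, so the
# input gradient of the loss is simply -w (illustrative, not Foolbox code).
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 1.0])
x_adv = fgsm_perturb(x, -w, eps=0.1)
```

Benchmarking toolboxes like Foolbox wrap many such attacks behind a common interface so that robustness numbers are comparable across models.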


# Towards Evaluating the Robustness of Neural Networks

16 Aug 2016 · carlini/nn_robust_attacks

Defensive distillation is a recently proposed approach that can take an arbitrary neural network and increase its robustness, reducing the success rate of current attacks at finding adversarial examples from $95\%$ to $0.5\%$.
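
The attack introduced in this paper minimizes a penalized objective over the perturbation $\delta$: the $L_2$ norm of $\delta$ plus a margin term on the logits. This is a hedged NumPy sketch of that objective as described in the paper, not the released code; the function name `cw_loss` and the default constants `c` and `kappa` are illustrative.

```python
import numpy as np

def cw_loss(delta, logits, target, c=1.0, kappa=0.0):
    """Carlini-Wagner style L2 objective: ||delta||_2^2 + c * f(logits),
    where f is negative once the target class leads by margin kappa."""
    other = np.max(np.delete(logits, target))       # best non-target logit
    f = max(other - logits[target], -kappa)         # margin hinge
    return float(np.dot(delta, delta)) + c * f
```

Minimizing this objective (e.g. with gradient descent over `delta`) trades off perturbation size against how confidently the network is pushed toward the target class.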


# Provable defenses against adversarial examples via the convex outer adversarial polytope

We propose a method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations on the training data.
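
Abstractly, training such a provably robust classifier means solving a min-max problem; written in standard robust-optimization notation (my paraphrase, not a formula quoted from the paper), the outer minimization is over parameters $\theta$ and the inner maximization over norm-bounded perturbations:

$$\min_\theta \sum_{(x,y)} \; \max_{\|\delta\|_\infty \le \epsilon} \; \ell\big(f_\theta(x + \delta),\, y\big)$$

The paper's contribution is to replace the intractable inner maximum with a tractable upper bound computed over a convex outer approximation of the set of reachable activations, so that minimizing the bound certifies robustness.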


# Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser

First, with HGD as a defense, the target model is more robust to both white-box and black-box adversarial attacks.


Correctly evaluating defenses against adversarial examples has proven to be extremely difficult.


# Boosting Adversarial Attacks with Momentum

To further improve the success rates of black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that even adversarially trained models with strong defenses remain vulnerable to our black-box attacks.
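
The momentum iterative update described here accumulates an $L_1$-normalized gradient across iterations before taking a signed step. This is a minimal NumPy sketch of one such step under my reading of the method; the helper name `mi_fgsm_step` and its argument names are illustrative.

```python
import numpy as np

def mi_fgsm_step(x, g_prev, grad, alpha, mu):
    """One momentum iterative FGSM step: decay-accumulate the
    L1-normalized loss gradient, then move by alpha in its sign."""
    g = mu * g_prev + grad / np.sum(np.abs(grad))
    return x + alpha * np.sign(g), g

x0 = np.zeros(2)
g0 = np.zeros(2)
x1, g1 = mi_fgsm_step(x0, g0, grad=np.array([1.0, -3.0]), alpha=0.5, mu=1.0)
```

The momentum term `mu * g_prev` stabilizes the update direction across iterations, which is what makes the iterates transfer better to unseen (black-box) models.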
