87 papers with code · Adversarial


# Technical Report on the CleverHans v2.1.0 Adversarial Examples Library

3 Oct 2016 · openai/cleverhans

An adversarial example library for constructing attacks, building defenses, and benchmarking both

4,374
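CleverHans benchmarks gradient-based attacks such as the Fast Gradient Sign Method (FGSM) from the same authors. The sketch below illustrates the FGSM idea on a toy logistic-regression model; all names and the model are illustrative assumptions, not the CleverHans API:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss_grad_wrt_input(x, w, b, y):
    """Gradient of the binary cross-entropy loss w.r.t. the input x
    for a logistic model p = sigmoid(w.x + b); dL/dx_i = (p - y) * w_i."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = sigmoid(z)
    return [(p - y) * wi for wi in w]

def fgsm(x, w, b, y, eps):
    """FGSM: move every input feature by eps in the sign of the loss gradient."""
    g = loss_grad_wrt_input(x, w, b, y)
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, g)]

# toy example: the model classifies x correctly before the attack
w, b = [2.0, -1.0], 0.0
x, y = [0.5, 0.2], 1           # true label 1; clean prediction is p > 0.5
x_adv = fgsm(x, w, b, y, eps=0.6)
```

A perturbation bounded by `eps` in each coordinate is enough to flip this model's prediction, which is the failure mode the library is built to measure.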

# The Limitations of Deep Learning in Adversarial Settings

24 Nov 2015 · openai/cleverhans

In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.

4,374
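The attacks this paper introduces (e.g. the Jacobian-based Saliency Map Attack) rely on the forward derivative: which input feature most influences the output. A minimal sketch of that idea using finite differences on a toy score function; the function and helper names are illustrative, not the paper's code:

```python
def forward_derivative(f, x, i, h=1e-5):
    """Central finite-difference estimate of df/dx_i at x."""
    xp = list(x); xp[i] += h
    xm = list(x); xm[i] -= h
    return (f(xp) - f(xm)) / (2 * h)

def most_salient_feature(f, x):
    """Index of the input feature with the largest |df/dx_i| --
    the feature a saliency-map attack would perturb first."""
    grads = [abs(forward_derivative(f, x, i)) for i in range(len(x))]
    return max(range(len(x)), key=grads.__getitem__)

# toy score function where feature 1 dominates
f = lambda x: 0.1 * x[0] + 3.0 * x[1] + 0.5 * x[2]
most_salient_feature(f, [1.0, 1.0, 1.0])  # -> 1
```

Perturbing only the most salient features is what lets such attacks change a classification while modifying a small fraction of the input.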

# Foolbox: A Python toolbox to benchmark the robustness of machine learning models

13 Jul 2017 · bethgelab/foolbox

Foolbox is a new Python package to generate such adversarial perturbations and to quantify and compare the robustness of machine learning models.

1,323
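One way Foolbox quantifies robustness is by the size of the smallest perturbation that changes a model's decision. The self-contained sketch below (not the Foolbox API) binary-searches the minimal L-infinity budget that flips a toy linear classifier:

```python
def predict(w, x):
    """Toy linear classifier: label 1 if w.x > 0, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

def perturb(w, x, eps):
    """Worst-case L-infinity perturbation against a linear model:
    shift each feature by eps against the sign of its weight."""
    return [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

def minimal_eps(w, x, lo=0.0, hi=1.0, iters=40):
    """Binary search for the smallest eps that flips the prediction."""
    orig = predict(w, x)
    if predict(w, perturb(w, x, hi)) == orig:
        return None  # not flippable within [lo, hi]
    for _ in range(iters):
        mid = (lo + hi) / 2
        if predict(w, perturb(w, x, mid)) == orig:
            lo = mid
        else:
            hi = mid
    return hi
```

Reporting this minimal epsilon per input, rather than attack success at one fixed budget, is the kind of robustness comparison the toolbox enables across models.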

# Adversarial Examples on Graph Data: Deep Insights into Attack and Defense

5 Mar 2019 · stellargraph/stellargraph

Based on this observation, we propose a defense approach which inspects the graph and recovers the potential adversarial perturbations.

900

# Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

711

SOTA for Adversarial Attack on 1B Words (using extra training data)

482

# Towards Evaluating the Robustness of Neural Networks

16 Aug 2016 · carlini/nn_robust_attacks

Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%.

425
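The success rates quoted above (95% down to 0.5%) are the fraction of correctly classified inputs for which an attack finds a misclassified example. A minimal sketch of that bookkeeping with a toy model and a fixed-budget "attack"; both are illustrative assumptions, not the paper's optimization-based attacks:

```python
def attack_success_rate(model, attack, inputs, labels):
    """Fraction of correctly classified inputs for which `attack`
    produces an example the model misclassifies."""
    successes, attempts = 0, 0
    for x, y in zip(inputs, labels):
        if model(x) != y:
            continue  # only attack points the model already gets right
        attempts += 1
        if model(attack(x, y)) != y:
            successes += 1
    return successes / attempts if attempts else 0.0

# toy 1-D threshold model and a fixed-shift attack toward the boundary
model = lambda x: 1 if x > 0.0 else 0
attack = lambda x, y: x - 0.3 if y == 1 else x + 0.3
rate = attack_success_rate(model, attack,
                           inputs=[0.1, 0.2, 0.5, -0.4],
                           labels=[1, 1, 1, 0])
```

The paper's point is that this metric is only meaningful against strong attacks: its new attacks restore high success rates against defensively distilled networks.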

We curate 7,500 natural adversarial examples and release them in an ImageNet classifier test set that we call ImageNet-A.

246

# Provable defenses against adversarial examples via the convex outer adversarial polytope

We propose a method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations on the training data.

230
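Wong and Kolter's method bounds a network's outputs over a convex outer approximation of the set of reachable activations. A simpler relative of that idea, shown here only as an illustration and not as the paper's method, is interval bound propagation: push elementwise input bounds through a linear layer and a ReLU to certify the output range under an L-infinity perturbation:

```python
def interval_linear(W, b, lo, hi):
    """Propagate elementwise bounds [lo, hi] through y = Wx + b.
    Each output's lower bound takes the worst-case input per weight sign."""
    lo_out, hi_out = [], []
    for row, bi in zip(W, b):
        l = bi + sum(w * (lo[j] if w > 0 else hi[j]) for j, w in enumerate(row))
        h = bi + sum(w * (hi[j] if w > 0 else lo[j]) for j, w in enumerate(row))
        lo_out.append(l)
        hi_out.append(h)
    return lo_out, hi_out

def interval_relu(lo, hi):
    """ReLU is monotone, so bounds pass through elementwise."""
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

# certify a toy one-layer net on an L-infinity ball of radius 0.1 around x
x, eps = [0.5, -0.2], 0.1
lo = [xi - eps for xi in x]
hi = [xi + eps for xi in x]
lo, hi = interval_linear([[1.0, -2.0], [0.5, 0.5]], [0.0, 0.1], lo, hi)
lo, hi = interval_relu(lo, hi)
```

If the certified lower bound of the true class's logit exceeds the upper bounds of all others, no perturbation in the ball can change the prediction; the paper's convex-relaxation bounds are tighter than these intervals and are optimized during training.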