
Adversarial Attack

87 papers with code · Adversarial


Greatest papers with code

Technical Report on the CleverHans v2.1.0 Adversarial Examples Library

3 Oct 2016 · openai/cleverhans

An adversarial example library for constructing attacks, building defenses, and benchmarking both

ADVERSARIAL ATTACK ADVERSARIAL DEFENSE
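The canonical attack implemented by such libraries is the fast gradient sign method (FGSM). The following is a minimal NumPy sketch of FGSM on a toy logistic-regression classifier — illustrative only, not the CleverHans API:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM: perturb x by eps in the sign of the loss gradient,
    increasing the cross-entropy loss of a logistic-regression model."""
    p = sigmoid(w @ x + b)          # model's predicted P(y=1)
    grad = (p - y) * w              # dL/dx for the cross-entropy loss
    return x + eps * np.sign(grad)  # L_inf-bounded adversarial example

# Toy model and a correctly classified input (illustrative values).
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.5, 0.2]), 1.0

x_adv = fgsm(x, y, w, b, eps=0.5)
clean_pred = int(w @ x + b > 0)    # correct prediction on the clean input
adv_pred = int(w @ x_adv + b > 0)  # prediction flipped by the attack
```

The perturbation stays inside an L-infinity ball of radius eps, which is why FGSM examples often look visually indistinguishable from the original input.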

The Limitations of Deep Learning in Adversarial Settings

24 Nov 2015 · openai/cleverhans

In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.

ADVERSARIAL ATTACK ADVERSARIAL DEFENSE

Foolbox: A Python toolbox to benchmark the robustness of machine learning models

13 Jul 2017 · bethgelab/foolbox

Foolbox is a new Python package to generate such adversarial perturbations and to quantify and compare the robustness of machine learning models.

ADVERSARIAL ATTACK
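One common way to quantify robustness, as Foolbox does, is to report the smallest perturbation that fools the model. A toy sketch of that idea — bisecting over the FGSM step size on a logistic-regression model; the function names and search strategy are illustrative, not Foolbox's API:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def is_adversarial(x, y, w, b, eps):
    """Apply a one-step FGSM perturbation of size eps and check
    whether it flips the model's decision."""
    p = sigmoid(w @ x + b)
    x_adv = x + eps * np.sign((p - y) * w)
    return int(w @ x_adv + b > 0) != y

def min_perturbation(x, y, w, b, lo=0.0, hi=1.0, steps=50):
    """Bisect for the smallest eps that fools the model: a toy
    version of the robustness score such toolboxes report."""
    assert is_adversarial(x, y, w, b, hi), "hi must already fool the model"
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if is_adversarial(x, y, w, b, mid) else (mid, hi)
    return hi

w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.5, 0.2]), 1
eps_min = min_perturbation(x, y, w, b)  # smallest fooling step size
```

A smaller `eps_min` means a less robust model at that input, which makes the score directly comparable across models.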

Adversarial Examples on Graph Data: Deep Insights into Attack and Defense

5 Mar 2019 · stellargraph/stellargraph

Based on this observation, we propose a defense approach which inspects the graph and recovers the potential adversarial perturbations.

ADVERSARIAL ATTACK ADVERSARIAL DEFENSE

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

ICML 2018 · anishathalye/obfuscated-gradients

We identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples.

ADVERSARIAL ATTACK ADVERSARIAL DEFENSE

advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch

20 Feb 2019 · BorealisAI/advertorch

advertorch is a toolbox for adversarial robustness research.

SOTA for Adversarial Attack on 1B Words (using extra training data)

ADVERSARIAL ATTACK ADVERSARIAL DEFENSE

Towards Evaluating the Robustness of Neural Networks

16 Aug 2016 · carlini/nn_robust_attacks

Defensive distillation is a recently proposed approach that can take an arbitrary neural network and increase its robustness, reducing the success rate of current attacks at finding adversarial examples from $95\%$ to $0.5\%$.

ADVERSARIAL ATTACK

Provable defenses against adversarial examples via the convex outer adversarial polytope

ICML 2018 · locuslab/convex_adversarial

We propose a method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations on the training data.

ADVERSARIAL ATTACK
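A simpler certification idea in the same spirit is interval bound propagation: push worst-case input intervals through the network, and if the lower bound on the correct logit stays winning, the prediction is provably robust within the ball. The sketch below uses that simplification rather than the paper's convex-relaxation LP, on a tiny linear-ReLU-linear network with made-up weights:

```python
import numpy as np

def certify(x, eps, W1, b1, w2, b2):
    """Interval bound propagation through linear -> ReLU -> linear.
    Returns True if the positive class is provably predicted for every
    input within an L_inf ball of radius eps around x."""
    mu = W1 @ x + b1
    r = eps * np.abs(W1).sum(axis=1)                       # pre-activation radius
    lo, hi = np.maximum(mu - r, 0), np.maximum(mu + r, 0)  # ReLU bounds
    w_pos, w_neg = np.maximum(w2, 0), np.minimum(w2, 0)
    out_lo = w_pos @ lo + w_neg @ hi + b2                  # lower bound on logit
    return bool(out_lo > 0)

# Illustrative two-layer network and input.
W1, b1 = np.eye(2), np.zeros(2)
w2, b2 = np.array([1.0, 1.0]), -2.0
x = np.array([1.0, 2.0])

small_ball_certified = certify(x, 0.1, W1, b1, w2, b2)  # provably robust
large_ball_certified = certify(x, 2.0, W1, b1, w2, b2)  # bound fails
```

A failed certificate does not prove an attack exists — the bounds are only sound, not tight — which is exactly the looseness that convex-relaxation methods like this paper's aim to reduce.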

AdvHat: Real-world adversarial attack on ArcFace Face ID system

23 Aug 2019 · papermsucode/advhat

In this paper we propose a novel easily reproducible technique to attack the best public Face ID system ArcFace in different shooting conditions.

ADVERSARIAL ATTACK

Natural Adversarial Examples

16 Jul 2019 · hendrycks/natural-adv-examples

We curate 7,500 natural adversarial examples and release them in an ImageNet classifier test set that we call ImageNet-A.

ADVERSARIAL ATTACK