
Adversarial Attack

65 papers with code · Adversarial


Latest papers with code

MetaAdvDet: Towards Robust Detection of Evolving Adversarial Attacks

6 Aug 2019 · sharpstill/MetaAdvDet

To solve this few-shot problem posed by evolving attacks, we propose a meta-learning based robust detection method that detects new adversarial attacks with limited examples.

ADVERSARIAL ATTACK META-LEARNING

Natural Adversarial Examples

16 Jul 2019 · hendrycks/natural-adv-examples

We curate 7,500 natural adversarial examples and release them in an ImageNet classifier test set that we call ImageNet-A.

ADVERSARIAL ATTACK

Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack

3 Jul 2019 · fra31/fab-attack

The evaluation of robustness against adversarial manipulation of neural network-based classifiers is mainly done with empirical attacks, as methods for exact computation, even when available, do not scale to large networks.

ADVERSARIAL ATTACK

Generating Natural Language Adversarial Examples through Probability Weighted Word Saliency

ACL 2019 · JHL-HUST/PWWS

Experiments on three popular datasets using convolutional as well as LSTM models show that PWWS reduces classification accuracy to the greatest extent while keeping a very low word substitution rate.

ADVERSARIAL ATTACK IMAGE CLASSIFICATION SEMANTIC TEXTUAL SIMILARITY TEXT CLASSIFICATION

01 Jul 2019

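The PWWS approach above lends itself to a compact sketch: score each word by how much deleting it changes the classifier's confidence, then greedily substitute words in saliency order. This is a simplified illustration (the real PWWS saliency additionally weights each word by the probability drop of its best substitute), and all names below are hypothetical:

```python
def pwws_attack(words, score, candidates):
    """Greedy, saliency-ordered word substitution (illustrative sketch).

    score: classifier confidence in the true class for a word list.
    candidates: hypothetical map from a word to its substitute words.
    """
    base = score(words)
    # Saliency of word i: confidence drop when that word is deleted.
    saliency = [(base - score(words[:i] + words[i + 1:]), i)
                for i in range(len(words))]
    attacked = list(words)
    for _, i in sorted(saliency, reverse=True):   # most salient word first
        best, best_score = attacked[i], score(attacked)
        for c in candidates.get(words[i], []):
            trial = attacked[:i] + [c] + attacked[i + 1:]
            if score(trial) < best_score:         # keep the strongest substitute
                best, best_score = c, score(trial)
        attacked[i] = best
    return attacked

# Toy classifier: confidence = fraction of words equal to "good".
score = lambda ws: ws.count("good") / max(len(ws), 1)
print(pwws_attack(["good", "movie"], score, {"good": ["terrible"]}))
# → ['terrible', 'movie']
```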
Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers

9 Jun 2019 · Hadisalman/smoothing-adversarial

Recent works have shown the effectiveness of randomized smoothing as a scalable technique for building neural network-based classifiers that are provably robust to $\ell_2$-norm adversarial perturbations.

ADVERSARIAL ATTACK ADVERSARIAL DEFENSE


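Randomized smoothing, as described above, predicts by majority vote over Gaussian perturbations of the input. A minimal sketch (the function and parameter names are illustrative, not from the paper's released code):

```python
import numpy as np

def smoothed_predict(f, x, sigma=0.25, n=1000, seed=0):
    """Majority-vote prediction of a randomly smoothed classifier.

    f: base classifier mapping an input vector to a class label.
    sigma, n: hypothetical noise level and Monte Carlo sample count.
    """
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n):
        # Classify a Gaussian-perturbed copy of the input.
        label = f(x + rng.normal(0.0, sigma, size=x.shape))
        counts[label] = counts.get(label, 0) + 1
    return max(counts, key=counts.get)      # most frequent label wins

# Toy base classifier: sign of the mean coordinate.
f = lambda v: int(v.mean() > 0)
x = np.full(4, 0.5)
print(smoothed_predict(f, x))  # → 1
```

The certified-radius guarantee in the paper comes from how confidently this vote favors the top class; the sketch shows only the prediction rule.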
Adversarial Examples for Non-Parametric Methods: Attacks, Defenses and Large Sample Limits

7 Jun 2019 · yangarbiter/adversarial-nonparametrics

Adversarial examples have received a great deal of recent attention because of their potential to uncover security flaws in machine learning systems.

ADVERSARIAL ATTACK ADVERSARIAL DEFENSE

Efficient Project Gradient Descent for Ensemble Adversarial Attack

7 Jun 2019 · wufanyou/EPGD

Considering $l_2$-norm attacks, Projected Gradient Descent (PGD) and the Carlini and Wagner (C\&W) attack are the two main methods: PGD controls the maximum perturbation of the adversarial examples, while the C\&W approach treats the perturbation as a regularization term optimized jointly with the loss function.

ADVERSARIAL ATTACK


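The contrast drawn above between PGD (a hard constraint on the perturbation) and C\&W (the perturbation as a regularizer) can be made concrete with a minimal $l_2$ PGD sketch; the names and step sizes are illustrative:

```python
import numpy as np

def pgd_l2(grad_fn, x0, eps=1.0, step=0.3, iters=20):
    """Projected Gradient Descent under an l2 budget (illustrative sketch).

    grad_fn: gradient of the attacker's loss w.r.t. the input.
    eps: maximum l2 perturbation; each iterate is projected back
    onto the eps-ball around the clean input x0.
    """
    x = x0.copy()
    for _ in range(iters):
        g = grad_fn(x)
        norm = np.linalg.norm(g)
        if norm > 0:
            x = x + step * g / norm           # normalized ascent step
        delta = x - x0
        dn = np.linalg.norm(delta)
        if dn > eps:                          # projection onto the l2 ball
            x = x0 + delta * (eps / dn)
    return x

# Toy attacker's loss: L(x) = w . x, so the gradient is the constant w.
w = np.array([3.0, 4.0])
x0 = np.zeros(2)
x_adv = pgd_l2(lambda x: w, x0)
print(np.linalg.norm(x_adv - x0))  # ~1.0, i.e. the full eps budget is used
```

A C\&W-style attack would instead minimize `loss(x) + c * ||x - x0||^2` with no projection, trading perturbation size against the attack objective.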
Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses

CVPR 2019 · jeromerony/fast_adversarial

Research on adversarial examples in computer vision tasks has shown that small, often imperceptible changes to an image can induce misclassification, which has security implications for a wide range of image processing systems.

ADVERSARIAL ATTACK ADVERSARIAL DEFENSE

01 Jun 2019

Scaleable input gradient regularization for adversarial robustness

27 May 2019 · cfinlay/tulip

Input gradient regularization is not thought to be an effective means for promoting adversarial robustness.

ADVERSARIAL ATTACK ADVERSARIAL DEFENSE


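Input gradient regularization, the subject of the paper above, penalizes the norm of the loss's gradient with respect to the input. A toy sketch for a logistic model, where that input gradient has a closed form (the names and the weight `lam` are illustrative, not from the paper's code):

```python
import numpy as np

def regularized_loss(w, x, y, lam=0.1):
    """Toy logistic loss plus an input-gradient penalty (illustrative).

    For p = sigmoid(w . x), the gradient of the cross-entropy w.r.t.
    the input x is (p - y) * w, so the squared-norm penalty is cheap
    to evaluate here; lam is a hypothetical regularization weight.
    """
    z = w @ x
    p = 1.0 / (1.0 + np.exp(-z))
    ce = -(y * np.log(p) + (1 - y) * np.log(1 - p))   # cross-entropy
    input_grad = (p - y) * w                          # d(ce)/dx in closed form
    penalty = lam * np.sum(input_grad ** 2)           # gradient-norm penalty
    return ce + penalty

w = np.array([1.0, -2.0])
x = np.array([0.5, 0.25])
print(regularized_loss(w, x, y=1.0))  # ≈ 0.818 (log 2 + 0.125)
```

For deep networks the input gradient has no closed form, so in practice the penalty is computed by a second backward pass through the model.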
Fooling Detection Alone is Not Enough: First Adversarial Attack against Multiple Object Tracking

27 May 2019 · anonymousjack/hijacking

Recent work in adversarial machine learning started to focus on the visual perception in autonomous driving and studied Adversarial Examples (AEs) for object detection models.

ADVERSARIAL ATTACK AUTONOMOUS DRIVING MULTIPLE OBJECT TRACKING OBJECT DETECTION
