Adversarial Attack

50 papers with code · Adversarial

State-of-the-art leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Latest papers without code

An Efficient and Margin-Approaching Zero-Confidence Adversarial Attack

ICLR 2019 Yang Zhang et al

There are two major paradigms of white-box adversarial attacks that attempt to impose input perturbations.

ADVERSARIAL ATTACK

01 May 2019

Second-Order Adversarial Attack and Certifiable Robustness

ICLR 2019 Bai Li et al

In this paper, we propose a powerful second-order attack method that reduces the accuracy of the defense model by Madry et al. (2017).

ADVERSARIAL ATTACK

01 May 2019

Structured Adversarial Attack: Towards General Implementation and Better Interpretability

ICLR 2019 Kaidi Xu et al

When generating adversarial examples to attack deep neural networks (DNNs), the Lp norm of the added perturbation is usually used to measure the similarity between the original image and the adversarial example.
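The Lp-norm similarity measure mentioned in this snippet can be sketched in a few lines; the random "image" and perturbation below are purely illustrative, not data from the paper.

```python
import numpy as np

# Hypothetical example: measuring an adversarial perturbation's size
# under several common Lp norms. Images are flattened float arrays in [0, 1].
rng = np.random.default_rng(0)
original = rng.random(784)                        # e.g. a flattened 28x28 image
perturbation = rng.uniform(-0.01, 0.01, 784)      # small additive noise
adversarial = np.clip(original + perturbation, 0.0, 1.0)

delta = adversarial - original
l0 = np.count_nonzero(delta)          # L0: number of changed pixels
l2 = np.linalg.norm(delta, ord=2)     # L2: Euclidean magnitude of the change
linf = np.max(np.abs(delta))          # Linf: largest single-pixel change
print(l0, l2, linf)
```

Attacks are typically constrained under one of these norms (e.g. Linf-bounded attacks cap the largest per-pixel change), which is why the choice of norm shapes what "similar" means.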

ADVERSARIAL ATTACK

01 May 2019

Defensive Quantization: When Efficiency Meets Robustness

ICLR 2019 Ji Lin et al

This paper aims to raise awareness of the security of quantized models, and we design a novel quantization methodology to jointly optimize the efficiency and robustness of deep learning models.

ADVERSARIAL ATTACK QUANTIZATION

01 May 2019

A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks

ICLR 2019 Jinghui Chen et al

Depending on how much information an adversary can access, adversarial attacks can be classified as white-box or black-box attacks.

ADVERSARIAL ATTACK

01 May 2019

ADef: an Iterative Algorithm to Construct Adversarial Deformations

ICLR 2019 Rima Alaifari et al

While deep neural networks have proven to be a powerful tool for many recognition and classification tasks, their stability properties are still not well understood.

ADVERSARIAL ATTACK

01 May 2019

Improving the Generalization of Adversarial Training with Domain Adaptation

ICLR 2019 Chuanbiao Song et al

Our intuition is to regard adversarial training on an FGSM adversary as a domain adaptation task with a limited number of target domain samples.
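The FGSM adversary referenced here perturbs an input one step along the sign of the loss gradient. A minimal sketch, assuming a toy logistic-regression "model" with binary cross-entropy loss (the weights and input are illustrative, not from the paper):

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """One-step FGSM: x_adv = x + eps * sign(d loss / d x).

    Uses a logistic-regression model p = sigmoid(w.x + b) with
    binary cross-entropy loss, whose input gradient is (p - y) * w.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # sigmoid prediction
    grad_x = (p - y) * w                     # gradient of BCE loss w.r.t. x
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(1)
w = rng.standard_normal(10)
x = rng.standard_normal(10)
x_adv = fgsm(x, w, b=0.0, y=1.0, eps=0.1)
print(np.max(np.abs(x_adv - x)))  # per-coordinate change is bounded by eps
```

For a deep network the same formula applies with the gradient obtained by backpropagation; adversarial training then mixes such perturbed samples into the training set.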

ADVERSARIAL ATTACK DOMAIN ADAPTATION

01 May 2019

NATTACK: A Strong and Universal Gaussian Black-Box Adversarial Attack

ICLR 2019 Yandong Li et al

In other words, there is a population of adversarial examples, instead of only one, for any input to a DNN.

ADVERSARIAL ATTACK

01 May 2019

CAMOU: Learning Physical Vehicle Camouflages to Adversarially Attack Detectors in the Wild

ICLR 2019 Yang Zhang et al

In particular, we learn a camouflage pattern to hide vehicles from being detected by state-of-the-art convolutional neural network based detectors.

ADVERSARIAL ATTACK

01 May 2019

Minimizing Perceived Image Quality Loss Through Adversarial Attack Scoping

23 Apr 2019 Kostiantyn Khabarlak et al

The presented adversarial attack analysis and the idea of attack scoping can be easily expanded to different datasets, thus making the paper's results applicable to a wide range of practical tasks.

ADVERSARIAL ATTACK AUTONOMOUS VEHICLES FACE RECOGNITION

23 Apr 2019