
Adversarial Attack

35 papers with code · Adversarial

State-of-the-art leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Latest papers with code

Trust Region Based Adversarial Attack on Neural Networks

16 Dec 2018 · amirgholami/trattack

Current state-of-the-art adversarial attack methods typically require either time-consuming hyper-parameter tuning or many iterations to solve an optimization-based attack. To address this problem, we present a new family of trust-region-based adversarial attacks, with the goal of computing adversarial perturbations efficiently.

ADVERSARIAL ATTACK

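To make the idea concrete, here is a minimal sketch of a trust-region-style iterative attack in PyTorch, assuming a generic classifier `model`; the accept/expand/shrink rule is illustrative, not the paper's exact TRAttack algorithm:

```python
# Illustrative trust-region-style attack: accept a step only if it
# actually increases the loss, growing the radius on success and
# shrinking it on failure. Not the paper's exact algorithm.
import torch
import torch.nn.functional as F

def tr_style_attack(model, x, y, radius=0.01, steps=20,
                    grow=2.0, shrink=0.5, max_radius=0.1):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Candidate step within the current trust-region radius.
        x_cand = (x_adv + radius * grad.sign()).detach()
        with torch.no_grad():
            cand_loss = F.cross_entropy(model(x_cand), y)
        if cand_loss > loss:   # productive step: accept and expand
            x_adv, radius = x_cand, min(radius * grow, max_radius)
        else:                  # overshoot: stay put and shrink
            x_adv, radius = x_adv.detach(), radius * shrink
    return x_adv
```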

Learning Transferable Adversarial Examples via Ghost Networks

9 Dec 2018 · LiYingwei/ghost-network

Existing ensemble-based attacks require training a family of diverse models and then ensembling them, both of which are computationally expensive. In particular, by reproducing the NIPS 2017 adversarial competition, our work outperforms the No. 1 attack submission by a large margin, demonstrating its effectiveness and efficiency.

ADVERSARIAL ATTACK

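A minimal sketch of the ghost-network trick, assuming a hypothetical backbone that exposes `features` and `classifier` submodules (as torchvision's VGG does): keep dropout active on intermediate features at attack time, so each forward pass samples a different "ghost" of the same trained model, then average gradients across passes:

```python
# Randomly perturbing intermediate features on every forward pass makes
# one trained model behave like a cheap ensemble for crafting
# transferable adversarial examples.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GhostWrapper(nn.Module):
    def __init__(self, backbone, drop_p=0.1):
        super().__init__()
        self.backbone = backbone  # assumed to expose .features / .classifier
        self.drop_p = drop_p

    def forward(self, x):
        h = self.backbone.features(x)
        # Dropout stays active at attack time, yielding a new "ghost"
        # network on every call.
        h = F.dropout(h, p=self.drop_p, training=True)
        return self.backbone.classifier(h.flatten(1))

def ghost_gradient(ghost_model, x, y, passes=8):
    x = x.clone().detach().requires_grad_(True)
    total = 0.0
    for _ in range(passes):
        loss = F.cross_entropy(ghost_model(x), y)
        total = total + torch.autograd.grad(loss, x)[0]
    return total / passes  # averaged gradient over the virtual ensemble
```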

Feature Denoising for Improving Adversarial Robustness

9 Dec 2018 · facebookresearch/ImageNet-Adversarial-Training

This study suggests that adversarial perturbations on images lead to noise in the features constructed by convolutional networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising.

ADVERSARIAL ATTACK · ADVERSARIAL DEFENSE · IMAGE CLASSIFICATION
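
A minimal sketch of a denoising block in this spirit: smooth the feature maps (a 3x3 mean filter here; the paper's strongest variant uses non-local means), then apply a 1x1 convolution and a residual connection so the block starts near the identity:

```python
# Feature-denoising block: denoise activations, then recombine with the
# input via a residual connection so it can be inserted into an
# existing network without destroying pretrained behavior.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        # Simple mean-filter denoising of the feature maps.
        denoised = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        return x + self.conv(denoised)  # residual: block starts near identity
```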

A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks

ICLR 2019 · uclaml/Frank-Wolfe-AdvML

Depending on how much information an adversary can access, adversarial attacks can be classified as white-box or black-box attacks. In both cases, optimization-based attack algorithms can achieve relatively low distortions and high attack success rates.

ADVERSARIAL ATTACK

27 Nov 2018
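
A minimal sketch of a Frank-Wolfe (projection-free) white-box attack over an L-infinity ball, assuming a PyTorch classifier: at each step the linear maximization oracle picks the best vertex of the ball, and the iterate moves toward it with the classic 2/(t+2) step size:

```python
# Frank-Wolfe attack sketch: maximize the loss over the L-inf ball of
# radius eps around the clean input without any projection step.
import torch
import torch.nn.functional as F

def frank_wolfe_attack(model, x, y, eps=8/255, steps=20):
    x0 = x.clone().detach()
    x_adv = x0.clone()
    for t in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Linear maximization oracle: the best vertex of the L-inf ball.
        s = x0 + eps * grad.sign()
        gamma = 2.0 / (t + 2.0)
        x_adv = ((1 - gamma) * x_adv + gamma * s).detach()
    return x_adv
```

Because every iterate is a convex combination of points inside the ball, the constraint is satisfied by construction and no projection is needed.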

Injecting and removing malignant features in mammography with CycleGAN: Investigation of an automated adversarial attack using neural networks

19 Nov 2018 · BreastGAN/experiment1

Purpose: To train a cycle-consistent generative adversarial network (CycleGAN) on mammographic data to inject or remove features of malignancy, and to determine whether these AI-mediated attacks can be detected by radiologists. At the higher resolution, all radiologists showed a significantly lower detection rate of cancer in the modified images (0.77-0.84 vs. 0.59-0.69, p=0.008); however, they were now able to reliably detect modified images due to better visibility of artifacts (0.92, 0.92 and 0.97).

ADVERSARIAL ATTACK

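For reference, a minimal sketch of the cycle-consistency loss at the core of CycleGAN, with hypothetical generators `G` (healthy to malignant) and `F_inv` (malignant to healthy); the full model also requires adversarial losses for both domains:

```python
# Cycle-consistency: translating to the other domain and back should
# reconstruct the original image. Generator names are illustrative.
import torch
import torch.nn.functional as F

def cycle_consistency_loss(G, F_inv, x_healthy, x_malignant, lam=10.0):
    loss_h = F.l1_loss(F_inv(G(x_healthy)), x_healthy)
    loss_m = F.l1_loss(G(F_inv(x_malignant)), x_malignant)
    return lam * (loss_h + loss_m)
```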

Improved Network Robustness with Adversary Critic

NeurIPS 2018 · aam-at/adversary_critic

Our main idea is that adversarial examples for a robust classifier should be indistinguishable from the regular data of the adversarial target. We formulate the problem of learning a robust classifier in the framework of Generative Adversarial Networks (GANs), where the adversarial attack on the classifier acts as a generator and a critic network learns to distinguish between regular and adversarial images.

ADVERSARIAL ATTACK

30 Oct 2018
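
A minimal sketch of this setup with a hypothetical `critic` network: the critic is trained to separate regular from adversarial images, while the attack objective combines misclassification with fooling the critic. The losses are illustrative, not the paper's exact formulation:

```python
# Critic vs. attack, GAN-style: the critic labels regular data 1 and
# adversarial data 0; the attack must hit the target class while
# looking "regular" to the critic.
import torch
import torch.nn.functional as F

def critic_loss(critic, x_real, x_adv):
    real_logits = critic(x_real)
    fake_logits = critic(x_adv)
    real = F.binary_cross_entropy_with_logits(
        real_logits, torch.ones_like(real_logits))
    fake = F.binary_cross_entropy_with_logits(
        fake_logits, torch.zeros_like(fake_logits))
    return real + fake

def attack_loss(model, critic, x_adv, target):
    cls = F.cross_entropy(model(x_adv), target)       # hit the target class
    logits = critic(x_adv)
    fool = F.binary_cross_entropy_with_logits(        # fool the critic
        logits, torch.ones_like(logits))
    return cls + fool
```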

Improving the Generalization of Adversarial Training with Domain Adaptation

ICLR 2019 · cxmscb/ATDA

In this scenario, it is difficult to train a model that generalizes well because of the lack of representative adversarial samples, i.e., samples that accurately reflect the adversarial domain. Our intuition is to regard adversarial training on the FGSM adversary as a domain adaptation task with a limited number of target-domain samples.

ADVERSARIAL ATTACK · DOMAIN ADAPTATION

01 Oct 2018
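
A minimal sketch of the intuition, assuming feature embeddings have already been extracted by some hypothetical encoder: generate FGSM adversaries, then penalize the statistical gap between clean and adversarial feature distributions (simple mean/covariance alignment here; the paper combines unsupervised and supervised adaptation terms):

```python
# Treat clean and adversarial features as source/target domains and
# align their first- and second-order statistics during training.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

def domain_alignment_loss(feat_clean, feat_adv):
    # feat_*: (batch, dim) feature matrices from the same encoder.
    mean_gap = (feat_clean.mean(0) - feat_adv.mean(0)).pow(2).sum()
    cov_gap = (torch.cov(feat_clean.T) - torch.cov(feat_adv.T)).pow(2).sum()
    return mean_gap + cov_gap
```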

Efficient Formal Safety Analysis of Neural Networks

NeurIPS 2018 · tcwangshiqi-columbia/Interval-Attack

Thus, there is an urgent need for formal analysis systems that can rigorously check neural networks for violations of different safety properties, such as robustness against adversarial perturbations within a certain $L$-norm of a given image. Our approach can check different safety properties and find concrete counterexamples for networks that are 10× larger than the ones supported by existing analysis techniques.

ADVERSARIAL ATTACK · ADVERSARIAL DEFENSE · AUTONOMOUS DRIVING · MALWARE DETECTION

19 Sep 2018
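
A minimal sketch of the simplest analysis in this family, naive interval bound propagation in NumPy (the paper's symbolic interval analysis is substantially tighter): push the box [x - eps, x + eps] through affine and ReLU layers to bound every output:

```python
# Interval bound propagation through a fully connected ReLU network:
# split each weight matrix into positive and negative parts so lower
# and upper bounds propagate soundly.
import numpy as np

def interval_forward(weights, biases, lo, hi):
    n = len(weights)
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = W_pos @ lo + W_neg @ hi + b
        new_hi = W_pos @ hi + W_neg @ lo + b
        if i < n - 1:  # ReLU on hidden layers only
            new_lo, new_hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
        lo, hi = new_lo, new_hi
    return lo, hi
```

If the lower bound of the true class's output exceeds the upper bounds of all other classes over the whole box, robustness is verified; otherwise the slack suggests where to search for a concrete counterexample.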

Second-Order Adversarial Attack and Certifiable Robustness

ICLR 2019 · locuslab/smoothing

We propose a powerful second-order attack method that outperforms existing attack methods in reducing the accuracy of state-of-the-art defense models based on adversarial training. The effectiveness of our attack method motivates an investigation of the provable robustness of a defense model.

ADVERSARIAL ATTACK

10 Sep 2018
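
A minimal sketch of the key second-order ingredient, a Hessian-vector product via double backpropagation in PyTorch, which exposes curvature information without ever materializing the Hessian; the paper's actual update rule is not reproduced here:

```python
# Hessian-vector product by differentiating the gradient: with
# create_graph=True, the first backward pass stays differentiable, so a
# second pass through (grad · v) yields H @ v.
import torch
import torch.nn.functional as F

def hessian_vector_product(model, x, y, v):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x, create_graph=True)
    hvp, = torch.autograd.grad((grad * v).sum(), x)
    return hvp  # same shape as x; curvature of the loss along v
```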