65 papers with code · Adversarial

# DAPAS : Denoising Autoencoder to Prevent Adversarial attack in Semantic Segmentation

14 Aug 2019

Deep learning techniques now show dramatic performance on computer vision tasks, in some cases even outperforming humans.
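The purify-then-classify idea behind this defense can be sketched with a closed-form linear denoiser standing in for the paper's trained denoising autoencoder (a minimal numpy sketch on toy low-rank data; the data shapes and the `purify` name are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy low-rank "clean" signals standing in for images: n samples, d features.
n, d, k = 400, 16, 4
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d))

# Perturbed inputs standing in for adversarially noised images.
X_noisy = X + 0.3 * rng.normal(size=(n, d))

# Closed-form linear denoiser W minimizing ||X_noisy @ W - X||^2
# (a simple stand-in for the paper's trained convolutional autoencoder).
W, *_ = np.linalg.lstsq(X_noisy, X, rcond=None)

def purify(x):
    """Denoising step placed in front of the downstream segmentation model."""
    return x @ W

mse_before = np.mean((X_noisy - X) ** 2)
mse_after = np.mean((purify(X_noisy) - X) ** 2)
```

The defense then feeds `purify(x)` rather than `x` to the segmentation network, so the perturbation is attenuated before it reaches the classifier.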

# Investigating Decision Boundaries of Trained Neural Networks

7 Aug 2019

Through numerical results, we confirm that some of the speculations about the decision boundaries are accurate, some of the computational methods can be improved, and some of the simplifying assumptions may be unreliable, for models with nonlinear activation functions.
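One standard numerical tool for probing decision boundaries is bisection along a segment whose endpoints are classified differently (a minimal sketch using a tiny fixed-weight tanh network, not any specific model from the paper):

```python
import numpy as np

# A tiny fixed-weight "trained" network with a nonlinear (tanh) activation:
# f(x) = tanh(x0) + tanh(x1); the sign of f is the predicted class.
def f(x):
    return np.tanh(x[0]) + np.tanh(x[1])

def boundary_point(a, b, steps=60):
    """Bisect along the segment from a to b until f crosses zero.

    Assumes f(a) and f(b) have opposite signs, so a decision boundary
    lies somewhere on the segment between them.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        p = a + mid * (b - a)
        # Keep the half-interval whose endpoints still straddle the boundary.
        if np.sign(f(p)) == np.sign(f(a)):
            lo = mid
        else:
            hi = mid
    return a + 0.5 * (lo + hi) * (b - a)

a, b = (-1.0, -1.0), (1.0, 1.0)   # opposite predicted classes
p = boundary_point(a, b)           # point (numerically) on the boundary
```

Repeating this along many segments yields sample points on the boundary surface, which is the kind of numerical evidence the paper uses to test speculations about boundary geometry.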

# Adversarial Self-Defense for Cycle-Consistent GANs

5 Aug 2019

The goal of unsupervised image-to-image translation is to map images from one domain to another without the ground truth correspondence between the two domains.

# The General Black-box Attack Method for Graph Neural Networks

4 Aug 2019

To this end, we begin by investigating the theoretical connections between different kinds of GNNs in a principled way and integrate different GNN models into a unified framework, dubbed as General Spectral Graph Convolution.

# Invariance-based Adversarial Attack on Neural Machine Translation Systems

3 Aug 2019

The proposed soft-attention-based technique outperforms existing methods such as HotFlip by a significant margin across all conducted experiments. The results demonstrate that state-of-the-art NMT systems are unable to capture the semantics of the source language.

# Adversarial Attack on Sentiment Classification

In this paper, we propose a white-box attack algorithm called "Global Search" and compare it with simple misspelling noise and a more sophisticated, common white-box attack approach called "Greedy Search".
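The misspelling-noise and greedy-search baselines can be illustrated against a toy lexicon-based sentiment scorer (a minimal sketch; the lexicon model and all names here are hypothetical stand-ins for the paper's target classifier):

```python
# Toy lexicon-based sentiment model: the prediction is "positive" when
# positive words outnumber negative ones.
POSITIVE = {"great", "love", "good"}
NEGATIVE = {"bad", "awful", "hate"}

def score(text):
    words = text.split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def misspellings(word):
    """All adjacent-character swaps of a word (simple misspelling noise)."""
    return [word[:i] + word[i + 1] + word[i] + word[i + 2:]
            for i in range(len(word) - 1)]

def greedy_attack(text):
    """Greedy search: at each step apply the single misspelling that
    lowers the sentiment score most, until the prediction flips."""
    words = text.split()
    while score(" ".join(words)) > 0:
        best, best_score = None, score(" ".join(words))
        for i, w in enumerate(words):
            for cand in misspellings(w):
                trial = words[:i] + [cand] + words[i + 1:]
                s = score(" ".join(trial))
                if s < best_score:
                    best, best_score = trial, s
        if best is None:          # no single edit helps; give up
            break
        words = best
    return " ".join(words)

original = "i love this great movie"
adversarial = greedy_attack(original)
```

A global search, by contrast, would consider combinations of edits jointly rather than committing to the single best edit at each step.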

# Black-box Adversarial ML Attack on Modulation Classification

1 Aug 2019

We have evaluated the robustness of two well-known modulation classifiers (based on convolutional neural networks and long short-term memory networks) against adversarial machine learning attacks in black-box settings.

# Nonconvex Zeroth-Order Stochastic ADMM Methods with Lower Function Query Complexity

30 Jul 2019

To address these drawbacks, in this paper we propose a novel fast zeroth-order stochastic alternating direction method of multipliers (i.e., ZO-SPIDER-ADMM) with lower function query complexity for solving nonconvex problems with multiple nonsmooth penalties.
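The zeroth-order ingredient can be illustrated with the standard two-point Gaussian-smoothing gradient estimator, which approximates a gradient using only function queries (a minimal sketch of the general technique, not the paper's variance-reduced SPIDER estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_gradient(f, x, mu=1e-3, samples=5000):
    """Two-point zeroth-order gradient estimate: average
    (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u over Gaussian directions u.
    Each sample costs exactly two function queries, and no gradients of f
    are ever evaluated."""
    d = x.size
    g = np.zeros(d)
    for _ in range(samples):
        u = rng.normal(size=d)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / samples

f = lambda x: 0.5 * np.dot(x, x)     # true gradient is x itself
x = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
g = zo_gradient(f, x)
rel_err = np.linalg.norm(g - x) / np.linalg.norm(x)
```

Methods like ZO-SPIDER-ADMM reduce the number of such function queries needed per iteration by reusing past estimates, which is where the lower function query complexity comes from.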

# On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method

26 Jul 2019

Robust machine learning is currently one of the most prominent topics, with the potential to help shape a future of advanced AI platforms that perform well not only in average cases but also in worst cases and adverse situations.