# Robust classification

57 papers with code • 2 benchmarks • 4 datasets

# Towards Deep Learning Models Resistant to Adversarial Attacks

Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal.
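The attack at the core of this robust-optimization view is projected gradient descent (PGD): repeatedly step the input along the sign of the loss gradient, then project back into a small perturbation ball. As a minimal illustration only (not the paper's implementation), here is a PGD-style $\ell_\infty$ attack on a toy linear model; `pgd_linear` and the example weights are invented for this sketch:

```python
import numpy as np

def pgd_linear(w, b, x, y, eps=0.5, alpha=0.05, steps=20):
    """PGD attack on a toy linear score w.x + b (illustrative sketch).

    For a label y in {-1, +1} the margin loss is -y*(w.x + b), whose input
    gradient is -y*w; each step ascends that loss and projects back into
    the L-infinity ball of radius eps around the original input.
    """
    x0, x_adv = x.copy(), x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(-y * w)     # gradient-sign step
        x_adv = np.clip(x_adv, x0 - eps, x0 + eps)  # project to the eps-ball
    return x_adv

w, b = np.array([1.0, -2.0]), 0.0
x, y = np.array([0.5, -0.4]), +1            # correctly classified: score 1.3 > 0
x_adv = pgd_linear(w, b, x, y)
print(np.sign(w @ x_adv + b))               # → -1.0 (prediction flipped)
```

Adversarial training then minimizes the loss at these worst-case points `x_adv` instead of the clean inputs `x`.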

# Certified Adversarial Robustness via Randomized Smoothing

8 Feb 2019

We show how to turn any classifier that classifies well under Gaussian noise into a new classifier that is certifiably robust to adversarial perturbations under the $\ell_2$ norm.
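The smoothed classifier predicts by majority vote over many Gaussian-perturbed copies of the input. A minimal sketch of that prediction rule (the certified-radius computation from the paper is omitted, and `smoothed_predict` and the toy base classifier are names invented here):

```python
import numpy as np

def smoothed_predict(classifier, x, sigma=0.25, n=1000, rng=None):
    """Majority-vote prediction g(x) = argmax_c P(f(x + noise) = c).

    classifier maps an input array to an integer class label; n noisy
    copies of x are drawn from N(x, sigma^2 I) and their votes counted.
    """
    rng = np.random.default_rng(rng)
    noise = rng.normal(0.0, sigma, size=(n,) + np.shape(x))
    votes = np.bincount([classifier(x + eps) for eps in noise])
    return int(np.argmax(votes))

# Toy base classifier: threshold on the mean pixel value.
f = lambda x: int(np.mean(x) > 0.5)
print(smoothed_predict(f, np.full(4, 0.9)))  # → 1
```

The paper's certificate additionally uses the vote fractions to bound an $\ell_2$ radius within which the majority class cannot change.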

# Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks

19 Nov 2015

Our approach is based on an objective function that trades off mutual information between observed examples and their predicted categorical class distribution against robustness of the classifier to an adversarial generative model.

# Unlabeled Data Improves Adversarial Robustness

We demonstrate, theoretically and empirically, that adversarial robustness can significantly benefit from semi-supervised learning.

# Denoised Smoothing: A Provable Defense for Pretrained Classifiers

We present a method for provably defending any pretrained image classifier against $\ell_p$ adversarial attacks.

# Learning with Confident Examples: Rank Pruning for Robust Classification with Noisy Labels

4 May 2017

To highlight, RP with a CNN classifier can predict whether an MNIST digit is a "one" or "not" with only 0.25% error, and 0.46% error across all digits, even when 50% of positive examples are mislabeled and 50% of observed positive labels are mislabeled negative examples.

# Towards the first adversarially robust neural network model on MNIST

Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations; even for MNIST, one of the most common toy datasets in computer vision, no neural network model exists for which adversarial perturbations are both large and semantically meaningful to humans.

# Label-Noise Robust Generative Adversarial Networks

To remedy this, we propose a novel family of GANs called label-noise robust GANs (rGANs), which, by incorporating a noise transition model, can learn a clean label conditional generative distribution even when training labels are noisy.

# Implicit Generation and Generalization in Energy-Based Models

20 Mar 2019

Energy based models (EBMs) are appealing due to their generality and simplicity in likelihood modeling, but have been traditionally difficult to train.

# SiamFC++: Towards Robust and Accurate Visual Tracking with Target Estimation Guidelines

14 Nov 2019

Following these guidelines, we design our Fully Convolutional Siamese tracker++ (SiamFC++) by introducing both a classification and a target state estimation branch (G1), a classification score without ambiguity (G2), tracking without prior knowledge (G3), and an estimation quality score (G4).
