Adversarial Defense

177 papers with code • 10 benchmarks • 5 datasets


Most implemented papers

AOGNets: Compositional Grammatical Architectures for Deep Learning

iVMCL/AOGNets CVPR 2019

This paper presents deep compositional grammatical architectures which harness the best of two worlds: grammar models and DNNs.

Certified Defenses against Adversarial Examples

worksheets/0xa21e7940 ICLR 2018

While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs.

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

anishathalye/obfuscated-gradients ICML 2018

We identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples.
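Circumventing such defenses typically relies on strong iterative gradient-based attacks. A minimal sketch of L∞ projected gradient descent (PGD), the standard attack of this kind, is below; the toy linear "model" (whose loss gradient is just a constant vector) and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pgd_attack(grad_fn, x, eps=0.3, alpha=0.05, steps=10):
    """L_inf PGD: repeatedly step along the sign of the loss gradient,
    projecting back into the eps-ball around the original input."""
    x_orig = x.copy()
    x_adv = x + np.random.uniform(-eps, eps, size=x.shape)  # random start
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))      # ascent step
        x_adv = np.clip(x_adv, x_orig - eps, x_orig + eps)   # project into eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                     # keep valid pixel range
    return x_adv

# Toy example: loss = w . x, so the loss gradient is the constant w.
np.random.seed(0)
w = np.array([1.0, -1.0])
x = np.array([0.5, 0.5])
x_adv = pgd_attack(lambda z: w, x)
```

A defense that masks gradients makes `grad_fn` uninformative, so this attack appears to fail even though the model is not actually robust; that is the false sense of security the paper describes.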

On Evaluating Adversarial Robustness

evaluating-adversarial-robustness/adv-eval-paper 18 Feb 2019

Correctly evaluating defenses against adversarial examples has proven to be extremely difficult.

Decoupled Kullback-Leibler Divergence Loss

jiequancui/DKL 23 May 2023

In this paper, we delve deeper into the Kullback-Leibler (KL) Divergence loss and observe that it is equivalent to the Decoupled Kullback-Leibler (DKL) Divergence loss, which consists of 1) a weighted Mean Square Error (wMSE) loss and 2) a Cross-Entropy loss incorporating soft labels.
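For reference, the standard KL divergence loss being decomposed here, as commonly used between clean ("teacher") and adversarial ("student") softmax outputs in adversarial training, can be sketched as follows. The toy logits are illustrative assumptions; the wMSE/cross-entropy decomposition itself is the paper's contribution and is not reproduced here.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_loss(logits_p, logits_q):
    """KL(p || q) between the softmax distributions of two logit vectors."""
    p, q = softmax(logits_p), softmax(logits_q)
    return np.sum(p * (np.log(p) - np.log(q)), axis=-1)

# Toy example: clean logits vs. adversarially perturbed logits.
clean = np.array([2.0, 0.5, -1.0])
adv = np.array([1.0, 1.5, -0.5])
loss = kl_loss(clean, adv)  # > 0 whenever the two distributions differ
```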

advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch

BorealisAI/advertorch 20 Feb 2019

advertorch is a toolbox for adversarial robustness research.

Robust Decision Trees Against Adversarial Examples

chenhongge/RobustTrees 27 Feb 2019

Although adversarial examples and model robustness have been extensively studied in the context of linear models and neural networks, research on tree-based models and on how to make them robust against adversarial examples is still limited.

Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers

Hadisalman/smoothing-adversarial NeurIPS 2019

In this paper, we employ adversarial training to improve the performance of randomized smoothing.
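At prediction time, a randomized-smoothing classifier labels an input by majority vote of the base classifier over Gaussian-noise copies of that input. A minimal sketch of that voting step is below; the toy base classifier, sample count, and noise level are illustrative assumptions (the paper's contribution is training that base classifier adversarially, and its certification procedure is not shown here).

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n=1000, seed=0):
    """Predict with the smoothed classifier g(x) = argmax_c P(f(x + noise) = c),
    estimated by majority vote over n Gaussian-noise samples."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    preds = np.array([base_classifier(x + d) for d in noise])
    return int(np.bincount(preds).argmax())

# Toy base classifier: predicts class 1 iff the feature sum is positive.
f = lambda z: int(z.sum() > 0)
x = np.array([0.3, 0.2])
pred = smoothed_predict(f, x)  # most noisy copies still sum > 0
```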

Testing Robustness Against Unforeseen Adversaries

ddkang/advex-uar 21 Aug 2019

To narrow in on this discrepancy between research and reality, we introduce ImageNet-UA, a framework for evaluating model robustness against a range of unforeseen adversaries, including eighteen new non-L_p attacks.

ATHENA: A Framework based on Diverse Weak Defenses for Building Adversarial Defense

softsys4ai/athena 2 Jan 2020

There has been extensive research on developing defense techniques against adversarial attacks; however, they have been mainly designed for specific model families or application domains and therefore cannot be easily extended.
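One simple way to combine diverse weak defenses is a majority vote over their predictions, sketched below. This is an illustrative assumption, not ATHENA's full framework, which supports multiple ensemble strategies beyond plain voting.

```python
import numpy as np

def ensemble_predict(defended_models, x):
    """Majority vote over the class predictions of an ensemble of (weak) defenses."""
    votes = np.array([m(x) for m in defended_models])
    return int(np.bincount(votes).argmax())

# Toy ensemble: three "defenses" whose majority disagrees with the outlier.
models = [lambda z: 1, lambda z: 1, lambda z: 0]
pred = ensemble_predict(models, np.zeros(4))  # -> 1 by majority vote
```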