The CleverHans library may be used to develop more robust machine learning models and to provide standardized benchmarks of models' performance in the adversarial setting. Section 1 provides an overview of adversarial examples in machine learning and of the CleverHans software.
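To make the attack setting concrete, here is a minimal fast gradient sign method (FGSM) sketch in plain PyTorch; it illustrates the kind of attack such a library standardizes and is not the CleverHans API itself. The `model` and the `(x, y)` batch are assumed to be defined elsewhere.

```python
# Minimal FGSM sketch in plain PyTorch (not the CleverHans API itself).
# `model` and the (x, y) batch are assumed to be defined elsewhere.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """Generate an adversarial example with the fast gradient sign method."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to a valid image range.
    x_adv = x + eps * x.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```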
ICML 2018 • anishathalye/obfuscated-gradients
We identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples. While defenses that cause obfuscated gradients appear to defeat iterative optimization-based attacks, we find defenses relying on this effect can be circumvented.
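One technique the paper describes for circumventing such defenses is Backward Pass Differentiable Approximation (BPDA). A rough sketch of the idea is below, assuming a hypothetical non-differentiable preprocessing defense `preprocess` (e.g. quantization): apply it in the forward pass but treat it as the identity when back-propagating, so iterative attacks still receive usable gradients.

```python
# Sketch of the BPDA idea: run the non-differentiable defense in the forward pass,
# but pretend it was the identity function on the backward pass.
import torch

class BPDAIdentity(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, preprocess):
        # `preprocess` is a hypothetical non-differentiable defense function.
        return preprocess(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient w.r.t. x is passed straight through; `preprocess` gets no gradient.
        return grad_output, None

# Usage inside an iterative attack loop (illustrative):
# logits = model(BPDAIdentity.apply(x_adv, preprocess))
```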
ICLR 2018 • facebookresearch/adversarial_image_defenses
This paper investigates strategies that defend against adversarial-example attacks on image-classification systems by transforming the inputs before feeding them to the system. Specifically, we study applying image transformations such as bit-depth reduction, JPEG compression, total variance minimization, and image quilting before feeding the image to a convolutional network classifier.
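A rough sketch of two of these transformations is below, assuming images arrive as floating-point arrays in [0, 1]; total variance minimization and image quilting are more involved and are omitted here.

```python
# Sketch of two input transformations studied as defenses:
# bit-depth reduction and a JPEG compression round-trip.
import io
import numpy as np
from PIL import Image

def reduce_bit_depth(x, bits=3):
    """Quantize each channel of a float image in [0, 1] to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def jpeg_compress(x, quality=75):
    """Round-trip a float image in [0, 1] through JPEG compression."""
    img = Image.fromarray((x * 255).astype(np.uint8))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return np.asarray(Image.open(buf)).astype(np.float32) / 255.0
```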
facebookresearch/ImageNet-Adversarial-Training
• This study suggests that adversarial perturbations on images lead to noise in the features constructed by convolutional networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising.
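A simplified sketch of what a feature-denoising block could look like in PyTorch, with a plain 3x3 mean filter standing in for the paper's denoising operations (e.g. non-local means); the channel handling and residual wiring here are illustrative assumptions, not the exact architecture.

```python
# Simplified feature-denoising block: denoise the feature map, pass it through a
# 1x1 convolution, and add it back to the input residually.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        # 3x3 mean filtering stands in for a more sophisticated denoising operation.
        denoised = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        return x + self.conv(denoised)
```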
CVPR 2018 • lfz/Guided-Denoise
First, with the high-level representation guided denoiser (HGD) as a defense, the target model is more robust to either white-box or black-box adversarial attacks. Second, HGD can be trained on a small subset of the images and generalizes well to other images and unseen classes.
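A sketch of the training signal behind such a high-level representation guided denoiser: rather than a pixel-space reconstruction loss, the denoiser is trained so that a fixed classifier's high-level features of the denoised adversarial image match those of the clean image. The `denoiser` and `feature_extractor` names are placeholders, and this is an illustration rather than the paper's exact objective.

```python
# Illustrative HGD-style training loss: match high-level features of the denoised
# adversarial image to those of the clean image under a fixed feature extractor.
import torch

def hgd_loss(denoiser, feature_extractor, x_clean, x_adv):
    denoised = denoiser(x_adv)
    with torch.no_grad():
        target_feats = feature_extractor(x_clean)
    return torch.mean(torch.abs(feature_extractor(denoised) - target_feats))
```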
QData/AdversarialDNN-Playground
Due to the complex nature of deep learning, it is challenging to understand how deep models can be fooled by adversarial examples. Such a tool can also help security experts explore further vulnerabilities of deep learning deployed as a software module.
ICLR 2019 • hendrycks/robustness
In this paper we establish rigorous benchmarks for image classifier robustness. Then we propose a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations.
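A minimal sketch of how such a benchmark can be consumed: evaluate accuracy separately per corruption or perturbation type and severity. The `corrupted_loaders` mapping is a hypothetical stand-in for the dataset's splits, not part of the released benchmark code.

```python
# Evaluate a classifier separately on each corruption/perturbation split.
# `corrupted_loaders` is a hypothetical {(corruption, severity): DataLoader} mapping.
import torch

def benchmark_robustness(model, corrupted_loaders, device="cpu"):
    model.eval()
    results = {}
    for key, loader in corrupted_loaders.items():
        correct, total = 0, 0
        with torch.no_grad():
            for x, y in loader:
                pred = model(x.to(device)).argmax(dim=1)
                correct += (pred == y.to(device)).sum().item()
                total += y.numel()
        results[key] = correct / total
    return results
```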
Deep neural networks (DNNs) are one of the most prominent technologies of our time, as they achieve state-of-the-art performance in many machine learning tasks, including but not limited to image classification, text mining, and speech processing. However, DNNs are vulnerable to adversarial examples. Rather than leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples.
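A sketch of the zeroth-order estimation at the heart of such attacks: approximate partial derivatives of the attack objective with symmetric finite differences over individual coordinates, using only the model's outputs. The `loss_fn` here is a hypothetical scalar attack objective, and `coords` is a sampled subset of pixel coordinates.

```python
# Coordinate-wise zeroth-order gradient estimation via symmetric finite differences.
# `loss_fn(x)` is a hypothetical scalar attack objective computed from model outputs.
import numpy as np

def estimate_gradient(loss_fn, x, coords, h=1e-4):
    grad = np.zeros_like(x)
    for i in coords:  # a sampled subset of coordinates to keep queries manageable
        e = np.zeros_like(x)
        e.flat[i] = h
        grad.flat[i] = (loss_fn(x + e) - loss_fn(x - e)) / (2 * h)
    return grad
```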
ICLR 2018 • cihangxie/NIPS2017_adv_challenge_defense
Convolutional neural networks have demonstrated high accuracy on various tasks in recent years. However, they are extremely vulnerable to adversarial examples.
ICLR 2018 • vtjeng/MIPVerify.jl
While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs. Defenses based on regularization and adversarial training have been proposed, but are often followed by new, stronger attacks that defeat them.
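For context on the adversarial-training defenses mentioned above, a generic sketch of one adversarial-training step is below: craft a perturbed batch with a few projected gradient steps, then update the model on it. The `model`, `optimizer`, and `(x, y)` batch are assumed to exist; this is a generic illustration, not this paper's method.

```python
# Generic adversarial-training step: build a PGD-perturbed batch, then train on it.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, eps=0.03, alpha=0.01, steps=5):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball and valid image range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
```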