| Trend | Dataset | Best Method | Paper title | Paper | Code | Compare |
We identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples.
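That paper circumvents such defenses with attacks like Backward Pass Differentiable Approximation (BPDA). Below is a minimal PyTorch sketch of the core BPDA idea, not the authors' code: a non-differentiable preprocessing step (here, a hypothetical bit-depth quantizer) is applied on the forward pass, while the backward pass approximates it with the identity so the attacker still obtains usable gradients.

```python
import torch

class BPDAIdentity(torch.autograd.Function):
    """Backward Pass Differentiable Approximation (sketch).

    Forward: apply a non-differentiable transform (bit-depth quantization).
    Backward: pretend the transform is the identity so gradients flow through.
    """

    @staticmethod
    def forward(ctx, x):
        # Non-differentiable preprocessing that would normally block gradients.
        return torch.round(x * 255.0) / 255.0

    @staticmethod
    def backward(ctx, grad_output):
        # Approximate d(transform)/dx with the identity.
        return grad_output


def defended_forward(model, x):
    """Hypothetical defended model: quantize the input, then classify."""
    return model(BPDAIdentity.apply(x))
```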
This study suggests that adversarial perturbations on images lead to noise in the features constructed by image-classification networks.
SOTA for adversarial defense at CAAD 2018 (the Competition on Adversarial Attacks and Defenses)
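The defense built on this observation inserts denoising blocks into the network's feature maps. A rough sketch of one such block is below, assuming a simple local-mean smoothing as the denoising operation and an arbitrary channel count; the paper's actual blocks favor non-local means and are trained end-to-end with adversarial training.

```python
import torch.nn as nn
import torch.nn.functional as F

class DenoisingBlock(nn.Module):
    """Sketch of a feature-denoising block: smooth the feature map, mix channels
    with a 1x1 convolution, and add the result back via a residual connection."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1x1 = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        # Simple local-mean denoising of each feature map (one of several
        # denoising operations that could be plugged in here).
        denoised = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        return x + self.conv1x1(denoised)
```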
This paper investigates strategies that defend against adversarial-example attacks on image-classification systems by transforming the inputs before feeding them to the system.
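As a concrete illustration, two of the transformations studied in that line of work, JPEG compression and bit-depth reduction, can be applied as a preprocessing step before classification. The sketch below assumes a generic classifier callable and illustrative parameter values rather than the paper's exact settings.

```python
import io
import numpy as np
from PIL import Image

def jpeg_compress(img: Image.Image, quality: int = 75) -> Image.Image:
    """Re-encode the image as JPEG to destroy fine-grained adversarial noise."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def reduce_bit_depth(img: Image.Image, bits: int = 3) -> Image.Image:
    """Quantize pixel values to 2**bits levels."""
    arr = np.asarray(img).astype(np.float32) / 255.0
    levels = 2 ** bits - 1
    arr = np.round(arr * levels) / levels
    return Image.fromarray((arr * 255).astype(np.uint8))

def defended_predict(classifier, img: Image.Image):
    """Transform the input before handing it to the (hypothetical) classifier."""
    return classifier(reduce_bit_depth(jpeg_compress(img)))
```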
The principled nature of this robust-optimization (min-max) formulation also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal.
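Concretely, the min-max formulation trains on worst-case inputs: an inner maximization finds a perturbation within an L-infinity ball (typically via projected gradient descent), and the outer minimization updates the model on those perturbed inputs. Below is a minimal PyTorch sketch of one such training step; the hyperparameters and the `model`/`optimizer` objects are placeholders, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: find a perturbation within an L-infinity ball of radius eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()
        # Project back into the eps-ball around x and into the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: update the model on adversarially perturbed inputs."""
    model.eval()
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```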
We also propose a new dataset called ImageNet-P, which enables researchers to benchmark a classifier's robustness to common perturbations.
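ImageNet-P consists of short sequences of increasingly perturbed frames, and robustness is summarized by how often the top-1 prediction flips between consecutive frames. The sketch below captures that flip-rate idea under the assumption that `prediction_sequences` holds per-sequence top-1 labels; it follows the general notion rather than the paper's exact evaluation code.

```python
def flip_probability(prediction_sequences):
    """Fraction of consecutive frame pairs whose top-1 prediction changes.

    prediction_sequences: iterable of sequences, each a list of predicted labels
    for the frames of one perturbation sequence.
    """
    flips, pairs = 0, 0
    for seq in prediction_sequences:
        for prev, cur in zip(seq, seq[1:]):
            flips += int(prev != cur)
            pairs += 1
    return flips / max(pairs, 1)

# Example: two short sequences; the first flips once, the second is stable.
print(flip_probability([[208, 208, 207, 207], [17, 17, 17]]))  # 0.2
```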