|Trend|Dataset|Best Method|Paper title|Paper|Code|Compare|
|---|---|---|---|---|---|---|
Foolbox is a Python package to generate adversarial perturbations and to quantify and compare the robustness of machine learning models.
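As a minimal sketch of what "generating an adversarial perturbation" means, the snippet below runs a one-step FGSM-style attack on a toy logistic-regression model in plain NumPy. This is not the Foolbox API; the model, weights, and `fgsm_perturb` helper are hypothetical and chosen only to illustrate perturbing an input along the sign of the loss gradient.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One-step FGSM: move x by eps in the direction sign(dLoss/dx).

    Uses the closed-form gradient of the logistic cross-entropy loss
    with respect to the input for a linear model w.x + b.
    """
    p = sigmoid(np.dot(w, x) + b)     # model's predicted probability
    grad_x = (p - y) * w              # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)  # perturbed (adversarial) input

# Toy linear model that classifies the clean input correctly.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])  # clean input, true label y = 1
y = 1.0

clean_correct = sigmoid(np.dot(w, x) + b) > 0.5  # True: clean input is classified 1
x_adv = fgsm_perturb(x, y, w, b, eps=0.9)
adv_correct = sigmoid(np.dot(w, x_adv) + b) > 0.5  # False: perturbed input flips class
```

Even this two-weight model is fooled by a bounded perturbation, which is the phenomenon the robustness toolkits and defenses listed here are built around.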
We identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples.
Defensive distillation is a recently proposed approach that can take an arbitrary neural network and increase its robustness, reducing the success rate of current attacks in finding adversarial examples from $95\%$ to $0.5\%$.
We propose a method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations on the training data.
In this paper we propose a novel, easily reproducible technique to attack ArcFace, the best public Face ID system, under different shooting conditions.