no code implementations • 1 Dec 2021 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
Our experimental results indicate that our approach speeds up adversarial training by a factor of 2-3 while incurring only a small reduction in clean and robust accuracy.
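For context, the sketch below shows the standard PGD adversarial training loop that such methods accelerate. This is a generic, hedged illustration, not the authors' specific speedup technique; the hyperparameters (eps, alpha, steps) and the names pgd_attack and adversarial_training_epoch are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft l_inf-bounded adversarial examples with projected gradient descent."""
    # random start inside the eps-ball, then clamp to valid pixel range
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # ascend the loss, then project back into the eps-ball around x
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer):
    """One epoch of training on adversarial examples (the min-max objective)."""
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)              # inner maximization
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)      # outer minimization
        loss.backward()
        optimizer.step()
```

The inner PGD loop multiplies the cost of each training step by the number of attack iterations, which is why reducing this overhead by 2-3 times matters in practice.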
1 code implementation • NeurIPS 2020 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
Deep learning classifiers are susceptible to well-crafted, imperceptible variations of their inputs, known as adversarial attacks.
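To make this threat model concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM) for generating such perturbations. It is a generic illustration rather than the attack studied in the paper, and the perturbation budget eps is an arbitrary assumption.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """One-step l_inf attack: perturb x along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # a per-pixel shift of size eps is often imperceptible to humans
    # yet enough to flip the classifier's decision
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```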
1 code implementation • 6 Jul 2020 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
Deep neural network classifiers suffer from adversarial vulnerability: well-crafted, unnoticeable changes to the input data can alter the classifier's decision.
1 code implementation • 15 Jan 2020 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
A significant advantage of such models is that their inverse is easy to compute.
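As a generic illustration of this property (a hedged sketch of an element-wise affine flow layer, not the spline transform proposed in the paper), the bijection below has a closed-form inverse and log-determinant:

```python
import torch
import torch.nn as nn

class AffineFlow(nn.Module):
    """Element-wise affine bijection z = x * exp(log_scale) + shift."""
    def __init__(self, dim):
        super().__init__()
        self.log_scale = nn.Parameter(torch.zeros(dim))
        self.shift = nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        z = x * self.log_scale.exp() + self.shift
        # log |det dz/dx|, needed for the change-of-variables likelihood
        log_det = self.log_scale.sum()
        return z, log_det

    def inverse(self, z):
        # the inverse is available in closed form: one subtraction, one division
        return (z - self.shift) * (-self.log_scale).exp()
```

Spline-based transforms retain this one-pass, analytic invertibility while being far more expressive than a simple affine map.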