1 code implementation • 6 Jul 2020 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
Deep neural network classifiers suffer from adversarial vulnerability: well-crafted, imperceptible changes to the input data can alter the classifier's decision.
1 code implementation • NeurIPS 2020 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
Deep learning classifiers are susceptible to well-crafted, imperceptible variations of their inputs, known as adversarial attacks.
1 code implementation • 15 Jan 2020 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
A significant advantage of such models is that their inverse is easy to compute.
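Assuming "such models" refers to invertible, flow-based networks (suggested by the easy-to-compute inverse), a minimal sketch of an affine coupling layer illustrates the property: the inverse reuses the same conditioning networks and needs no iterative solve. The fixed linear maps standing in for learned MLPs are illustrative placeholders, not the paper's architecture.

```python
import numpy as np

def coupling_forward(x, scale_net, shift_net):
    """Affine coupling: keep the first half, affinely transform the second
    half conditioned on the first."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    s, t = scale_net(x1), shift_net(x1)
    return np.concatenate([x1, x2 * np.exp(s) + t], axis=-1)

def coupling_inverse(y, scale_net, shift_net):
    """Inverse in closed form: the conditioner y1 is untouched, so the same
    networks recover s and t exactly."""
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    s, t = scale_net(y1), shift_net(y1)
    return np.concatenate([y1, (y2 - t) * np.exp(-s)], axis=-1)

# toy "networks": fixed linear maps as stand-ins for learned MLPs
rng = np.random.default_rng(0)
W_s, W_t = 0.5 * rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
scale_net = lambda h: h @ W_s
shift_net = lambda h: h @ W_t

x = rng.normal(size=(5, 4))
y = coupling_forward(x, scale_net, shift_net)
x_rec = coupling_inverse(y, scale_net, shift_net)
assert np.allclose(x, x_rec)  # forward then inverse recovers the input
```

The inversion cost is the same single forward pass through the conditioning networks, which is what makes sampling and density evaluation both tractable in such models.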
2 code implementations • 1 Dec 2021 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
Neural networks are vulnerable to adversarial attacks: adding well-crafted, imperceptible perturbations to their input can alter their output.
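A standard way such perturbations are crafted is the Fast Gradient Sign Method (FGSM), one gradient-sign step bounded in the L-infinity norm; the sketch below uses a toy linear model with an analytic gradient purely for illustration, not the attack studied in the paper.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """FGSM: step of size eps in the sign of the loss gradient w.r.t. x."""
    return x + eps * np.sign(grad)

# toy linear classifier: score = w @ x; for a loss L(x) = -w @ x
# (correct class wants a large score), the gradient dL/dx is -w
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, -0.2])
x_adv = fgsm_perturb(x, grad=-w, eps=0.1)

score, score_adv = w @ x, w @ x_adv
# the perturbation stays within the eps ball yet lowers the correct-class score
assert np.max(np.abs(x_adv - x)) <= 0.1 + 1e-12
assert score_adv < score
```

For a deep network the gradient would come from backpropagation rather than a closed form, but the perturbation rule is the same single line.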
1 code implementation • 13 Sep 2022 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
By leveraging the theory of coreset selection, we show how selecting a small subset of training data provides a principled approach to reducing the time complexity of robust training.
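As one illustrative coreset strategy (a stand-in, not necessarily the selection rule used in the paper), greedy k-center picks points that cover the data distribution, so expensive robust training can run on the subset instead of the full set:

```python
import numpy as np

def k_center_greedy(features, k, seed=0):
    """Greedy k-center coreset: repeatedly add the point farthest from the
    current selection, so the chosen subset covers the feature space."""
    rng = np.random.default_rng(seed)
    n = len(features)
    selected = [int(rng.integers(n))]
    dists = np.linalg.norm(features - features[selected[0]], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dists))  # farthest point from current centers
        selected.append(idx)
        dists = np.minimum(dists, np.linalg.norm(features - features[idx], axis=1))
    return np.array(selected)

rng = np.random.default_rng(1)
feats = rng.normal(size=(1000, 16))  # placeholder features for 1000 examples
core = k_center_greedy(feats, k=50)
# robust (e.g., adversarial) training would then iterate only over feats[core],
# cutting per-epoch cost by roughly n/k
assert len(set(core.tolist())) == 50
```

The principled part is choosing the subset so that training on it approximates training on the full data; gradient-matching objectives are another common way to formalize that goal.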
1 code implementation • 15 Mar 2023 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
In particular, we leverage the power of diffusion models and show that a carefully designed denoising process can counteract the effectiveness of the data-protecting perturbations.
1 code implementation • 13 Oct 2022 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
We show the effectiveness of the proposed method for robust training of DNNs on various poisoned datasets, significantly reducing the backdoor attack success rate.
no code implementations • 17 Feb 2024 • Hadi M. Dolatabadi, Sarah M. Erfani, Christopher Leckie
Our analysis of these two failure cases of DNNs reveals that a unified solution for shortcut learning in DNNs is within reach, and that TDA can play a significant role in forming such a framework.