no code implementations • 15 Mar 2023 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
Protecting personal data against the exploitation of machine learning models is of paramount importance.
1 code implementation • 13 Oct 2022 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
We show the effectiveness of the proposed method for robust training of DNNs on various poisoned datasets, reducing the backdoor success rate significantly.
no code implementations • 13 Sep 2022 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
By leveraging the theory of coreset selection, we show how selecting a small subset of training data provides a principled approach to reducing the time complexity of robust training.
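A minimal sketch of that coreset idea, assuming a PyTorch classifier: score each training example, keep only the top-`budget` subset, and run robust (adversarial) training on that subset alone. The gradient-norm score below is only an illustrative proxy, not the paper's exact selection objective.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Subset

def select_coreset(model, dataset, budget, device="cpu"):
    """Return a Subset holding the `budget` highest-scoring training examples."""
    loader = DataLoader(dataset, batch_size=256, shuffle=False)
    scores = []
    model.eval()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x), y, reduction="sum")
        grad = torch.autograd.grad(loss, x)[0]
        # Per-example input-gradient norm: a cheap proxy for how much an
        # example influences the (robust) training loss.
        scores.append(grad.flatten(1).norm(dim=1).cpu())
    scores = torch.cat(scores)
    top = scores.topk(budget).indices.tolist()
    return Subset(dataset, top)

# Adversarial training (e.g., PGD-based) then iterates only over
# select_coreset(model, train_set, budget), cutting per-epoch cost roughly
# in proportion to budget / len(train_set).
```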
1 code implementation • 1 Dec 2021 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
Neural networks are vulnerable to adversarial attacks: adding well-crafted, imperceptible perturbations to their input can modify their output.
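A short illustration of that vulnerability, using a single FGSM step (a standard attack, not necessarily the one studied in this paper): the input is nudged inside an L-infinity ball of radius `eps`, which is visually imperceptible yet can change the model's prediction. The model and `eps` are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=8 / 255):
    """Craft an imperceptible L-infinity perturbation that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    # Step in the loss-increasing direction, then clamp back to valid pixels.
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```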
1 code implementation • NeurIPS 2020 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
Deep learning classifiers are susceptible to well-crafted, imperceptible variations of their inputs, known as adversarial attacks.
1 code implementation • 6 Jul 2020 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
Deep neural network classifiers suffer from adversarial vulnerability: well-crafted, unnoticeable changes to the input data can affect the classifier decision.
1 code implementation • 15 Jan 2020 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
The significant advantage of such invertible generative models is their easy-to-compute inverse.
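A minimal sketch of that "easy-to-compute inverse" property, using an elementwise affine flow layer as a stand-in for the invertible transforms discussed here (the paper itself works with spline-based transforms): the inverse is available in closed form and costs the same as the forward pass.

```python
import torch
import torch.nn as nn

class AffineFlow(nn.Module):
    """y = exp(log_scale) * x + shift, invertible in closed form."""
    def __init__(self, dim):
        super().__init__()
        self.log_scale = nn.Parameter(torch.zeros(dim))
        self.shift = nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        y = x * self.log_scale.exp() + self.shift
        log_det = self.log_scale.sum().expand(x.shape[0])  # log |det dy/dx|
        return y, log_det

    def inverse(self, y):
        # Inverting costs one multiply-add, same as the forward direction.
        return (y - self.shift) * (-self.log_scale).exp()

flow = AffineFlow(dim=4)
x = torch.randn(8, 4)
y, _ = flow(x)
assert torch.allclose(flow.inverse(y), x, atol=1e-6)
```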