Search Results for author: Hadi M. Dolatabadi

Found 8 papers, 7 papers with code

Be Persistent: Towards a Unified Solution for Mitigating Shortcuts in Deep Learning

no code implementations • 17 Feb 2024 • Hadi M. Dolatabadi, Sarah M. Erfani, Christopher Leckie

Our analysis of these two failure cases of DNNs reveals that finding a unified solution for shortcut learning in DNNs is not out of reach, and TDA can play a significant role in forming such a framework.

Decision Making · Topological Data Analysis

The Devil's Advocate: Shattering the Illusion of Unexploitable Data using Diffusion Models

1 code implementation • 15 Mar 2023 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

In particular, we leverage the power of diffusion models and show that a carefully designed denoising process can counteract the effectiveness of the data-protecting perturbations.

Denoising
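The snippet above describes the general recipe: diffuse the protected image with Gaussian noise, then denoise it so the small data-protecting perturbations are destroyed along with the injected noise. Below is a minimal sketch of that idea only, not the paper's pipeline; `denoiser` is a hypothetical stand-in for a pretrained diffusion model's reverse process.

```python
# Minimal sketch (not the paper's implementation): run one forward-diffusion
# step on a "protected" image, then hand it to a denoiser so that small
# data-protecting perturbations are washed out together with the added noise.
import torch

def purify(x_protected: torch.Tensor, denoiser, alpha_bar_t: float = 0.7) -> torch.Tensor:
    """Diffuse to an intermediate timestep, then let the (hypothetical)
    denoiser reconstruct an estimate of the clean image."""
    eps = torch.randn_like(x_protected)
    # Forward diffusion: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps
    x_t = alpha_bar_t ** 0.5 * x_protected + (1.0 - alpha_bar_t) ** 0.5 * eps
    return denoiser(x_t)  # hypothetical reverse process mapping x_t back to ~x_0
```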

COLLIDER: A Robust Training Framework for Backdoor Data

1 code implementation • 13 Oct 2022 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

We show the effectiveness of the proposed method for robust training of DNNs on various poisoned datasets, reducing the backdoor success rate significantly.

Adversarial Coreset Selection for Efficient Robust Training

1 code implementation • 13 Sep 2022 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

By leveraging the theory of coreset selection, we show how selecting a small subset of training data provides a principled approach to reducing the time complexity of robust training.
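As a rough illustration of the idea in this entry, the sketch below keeps only the k currently hardest examples, which could then be used for the next training epoch. This is a crude loss-based heuristic for exposition only, not the principled coreset-selection objective used in the paper.

```python
# Crude illustration only: rank training points by current loss and keep the
# k hardest, then run (adversarial) training on that small subset. The paper
# uses a principled coreset-selection objective rather than this heuristic.
import torch

def select_subset(model, xs, ys, k):
    """Return the k examples with the largest current loss (a simple proxy)."""
    loss_fn = torch.nn.CrossEntropyLoss(reduction="none")
    with torch.no_grad():
        losses = loss_fn(model(xs), ys)      # per-example losses
    idx = torch.topk(losses, k).indices
    return xs[idx], ys[idx]
```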

$\ell_\infty$-Robustness and Beyond: Unleashing Efficient Adversarial Training

2 code implementations • 1 Dec 2021 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

Neural networks are vulnerable to adversarial attacks: adding well-crafted, imperceptible perturbations to their input can modify their output.
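The vulnerability described in this snippet can be reproduced with a one-step FGSM perturbation; the sketch below is a generic illustration of such an $\ell_\infty$-bounded attack, not this paper's training method.

```python
# Generic one-step FGSM illustration of the vulnerability described above:
# an imperceptible l_inf-bounded perturbation that often flips the prediction.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=8 / 255):
    """Move every input pixel by eps in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```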

AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows

1 code implementation • NeurIPS 2020 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

Deep learning classifiers are susceptible to well-crafted, imperceptible variations of their inputs, known as adversarial attacks.

Adversarial Attack
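For context on the black-box threat model mentioned here, the sketch below shows a generic score-based attack step that estimates gradients from model queries alone (NES-style finite differences). AdvFlow additionally models the perturbation with a normalizing flow, which this sketch omits.

```python
# Generic query-based (score-based) black-box attack step using NES-style
# gradient estimation. AdvFlow's normalizing-flow parameterization of the
# perturbation is omitted; this only illustrates the black-box setting.
import torch

def nes_step(query_loss, x, x_orig, sigma=0.01, n_samples=20, lr=0.01, eps=8 / 255):
    """One ascent step on a loss that is observable only through queries.

    `query_loss(x)` is assumed to return a scalar attack loss computed from
    model outputs alone (no gradient access).
    """
    noise = torch.randn(n_samples, *x.shape)
    losses = torch.stack([query_loss(x + sigma * u) for u in noise])
    grad_est = (losses.view(-1, *([1] * x.dim())) * noise).mean(0) / sigma
    x_new = x + lr * grad_est.sign()
    x_new = torch.min(torch.max(x_new, x_orig - eps), x_orig + eps)  # stay in l_inf ball
    return x_new.clamp(0.0, 1.0)
```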

Black-box Adversarial Example Generation with Normalizing Flows

1 code implementation • 6 Jul 2020 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

Deep neural network classifiers suffer from adversarial vulnerability: well-crafted, unnoticeable changes to the input data can affect the classifier decision.

Adversarial Attack

Invertible Generative Modeling using Linear Rational Splines

1 code implementation • 15 Jan 2020 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

The significant advantage of such models is their easy-to-compute inverse.
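The "easy-to-compute inverse" can be seen on a single monotone linear rational map, the building block of the spline transform in this paper: its inverse is available in closed form. The snippet below shows one such piece, not the full spline construction.

```python
# One monotone linear rational map y = (a*x + b) / (c*x + d) and its
# closed-form inverse x = (d*y - b) / (a - c*y). The paper stitches many such
# pieces into a monotone spline; this shows only a single piece.
import numpy as np

a, b, c, d = 2.0, 1.0, 0.5, 3.0          # a*d - b*c > 0  =>  strictly increasing

def forward(x):
    return (a * x + b) / (c * x + d)

def inverse(y):
    return (d * y - b) / (a - c * y)

x = np.linspace(0.0, 1.0, 5)
assert np.allclose(inverse(forward(x)), x)   # exact analytic inverse
```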
