Search Results for author: Hadi M. Dolatabadi

Found 4 papers, 3 papers with code

$\ell_\infty$-Robustness and Beyond: Unleashing Efficient Adversarial Training

no code implementations • 1 Dec 2021 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

Our experimental results indicate that our approach speeds up adversarial training by 2-3 times, while incurring only a small reduction in clean and robust accuracy.
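
For context only, the snippet below is a minimal sketch of a standard PGD-based adversarial training step in PyTorch; it is a generic baseline, not the paper's accelerated method (which this excerpt does not detail), and model, optimizer, and the hyperparameters are illustrative placeholders.

import torch
import torch.nn.functional as F

def pgd_adv_train_step(model, optimizer, x, y, eps=8/255, alpha=2/255, steps=10):
    # Standard PGD adversarial training step (illustrative baseline only;
    # the paper's speed-up technique is not shown here).
    model.eval()
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Ascend the loss within the l_inf ball of radius eps.
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    model.train()
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()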

AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows

1 code implementation • NeurIPS 2020 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

Deep learning classifiers are susceptible to well-crafted, imperceptible variations of their inputs, known as adversarial attacks.

Adversarial Attack
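
As a rough illustration of the flow-based black-box attack idea from the entry above (not the official AdvFlow implementation), the sketch below runs an NES-style search in the latent space of a pre-trained normalizing flow; flow with forward()/inverse() methods and query_loss are hypothetical stand-ins.

import torch

def latent_blackbox_attack(query_loss, flow, x, n_iter=100, pop=20, sigma=0.1, lr=0.01):
    # Rough sketch (not the official AdvFlow code): perturb the latent code of a
    # pre-trained normalizing flow and estimate gradients from black-box loss
    # queries alone (NES-style finite-difference estimator).
    z = flow.inverse(x).detach()                  # latent code of the clean image (hypothetical API)
    for _ in range(n_iter):
        noise = torch.randn(pop, *z.shape[1:])    # antithetic sampling omitted for brevity
        losses = torch.stack([query_loss(flow.forward(z + sigma * n)) for n in noise])
        grad_est = (losses.view(-1, *[1] * (z.dim() - 1)) * noise).mean(0, keepdim=True) / sigma
        z = z - lr * grad_est                     # descend the estimated attack objective
    return flow.forward(z)                        # candidate adversarial image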

Black-box Adversarial Example Generation with Normalizing Flows

1 code implementation • 6 Jul 2020 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

Deep neural network classifiers suffer from adversarial vulnerability: well-crafted, unnoticeable changes to the input data can affect the classifier decision.

Adversarial Attack

Invertible Generative Modeling using Linear Rational Splines

1 code implementation • 15 Jan 2020 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

A significant advantage of such models is their easy-to-compute inverse.
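
To illustrate why the inverse is cheap: each bin of a linear rational spline applies a monotone map y = (a*x + b) / (c*x + d), whose inverse is itself linear rational and available in closed form. The sketch below uses hypothetical coefficients rather than the paper's spline parameterization.

import numpy as np

a, b, c, d = 2.0, 1.0, 0.5, 3.0   # hypothetical coefficients; a*d - b*c > 0 keeps the map monotone

def forward(x):
    # one monotone linear rational segment, as used per bin in a linear rational spline
    return (a * x + b) / (c * x + d)

def inverse(y):
    # the inverse is again linear rational, so no iterative root-finding is needed
    return (d * y - b) / (a - c * y)

def log_abs_det_jacobian(x):
    # |dy/dx| = (a*d - b*c) / (c*x + d)^2, the term a normalizing flow needs
    # for the change-of-variables formula
    return np.log(a * d - b * c) - 2.0 * np.log(np.abs(c * x + d))

x = np.linspace(0.0, 1.0, 5)
assert np.allclose(inverse(forward(x)), x)   # exact round trip in closed form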
