Search Results for author: Ethan Rathbun

Found 4 papers, 1 paper with code

Distilling Adversarial Robustness Using Heterogeneous Teachers

no code implementations • 23 Feb 2024 • Jieren Deng, Aaron Palmer, Rigel Mahmood, Ethan Rathbun, Jinbo Bi, Kaleel Mahmood, Derek Aguiar

Achieving resiliency against adversarial attacks is necessary prior to deploying neural network classifiers in domains where misclassification incurs substantial costs, e.g., self-driving cars or medical imaging.

Adversarial Robustness • Knowledge Distillation • +1
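
As a rough illustration of multi-teacher distillation in general (not this paper's objective), a student can be trained against the averaged soft predictions of several teachers plus the hard labels. The temperature `T`, the weight `alpha`, and the simple teacher averaging below are illustrative assumptions, not values from the paper.

```python
# Generic multi-teacher distillation loss (illustrative sketch only).
import torch
import torch.nn.functional as F

def multi_teacher_distillation_loss(student_logits, teacher_logits_list,
                                    labels, T=4.0, alpha=0.7):
    """Cross-entropy on hard labels plus KL to the averaged teacher distribution."""
    # Average the teachers' temperature-softened predictions.
    teacher_probs = torch.stack(
        [F.softmax(t / T, dim=1) for t in teacher_logits_list]
    ).mean(dim=0)
    # KL divergence between student and averaged teacher (scaled by T^2, as is conventional).
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  teacher_probs, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```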

Game Theoretic Mixed Experts for Combinational Adversarial Machine Learning

1 code implementation • 26 Nov 2022 • Ethan Rathbun, Kaleel Mahmood, Sohaib Ahmad, Caiwen Ding, Marten van Dijk

First, how can the low transferability between defenses be utilized in a game theoretic framework to improve the robustness?

Adversarial Defense
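
To give a flavor of the game-theoretic framing (this is not the paper's solver), one can model defense selection as a two-player zero-sum game: given a payoff matrix of each defense's accuracy under each attack, the defender's mixed strategy that maximizes worst-case accuracy can be found with a small linear program. The payoff numbers below are hypothetical.

```python
# Illustrative sketch: solve for a defender's mixed strategy over defenses
# against a worst-case attacker, via linear programming.
import numpy as np
from scipy.optimize import linprog

payoff = np.array([[0.62, 0.31],    # defense A accuracy under attack 1, attack 2 (hypothetical)
                   [0.28, 0.66]])   # defense B accuracy under attack 1, attack 2 (hypothetical)
n_def, n_att = payoff.shape

# Variables: p_1..p_n (defense probabilities) and v (guaranteed worst-case accuracy).
c = np.zeros(n_def + 1)
c[-1] = -1.0                                          # maximize v  <=>  minimize -v
A_ub = np.hstack([-payoff.T, np.ones((n_att, 1))])    # v <= sum_i p_i * payoff[i, j] for every attack j
b_ub = np.zeros(n_att)
A_eq = np.hstack([np.ones((1, n_def)), np.zeros((1, 1))])  # probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0.0, 1.0)] * n_def + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("defense mixture:", res.x[:n_def], "worst-case accuracy:", res.x[-1])
```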

Attacking the Spike: On the Transferability and Security of Spiking Neural Networks to Adversarial Examples

no code implementations • 7 Sep 2022 • Nuo Xu, Kaleel Mahmood, Haowen Fang, Ethan Rathbun, Caiwen Ding, Wujie Wen

First, we show that successful white-box adversarial attacks on SNNs are highly dependent on the underlying surrogate gradient technique, even in the case of adversarially trained SNNs.

Adversarial Attack
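
For context on why white-box attacks depend on the surrogate gradient: a spiking activation is a hard threshold in the forward pass, so gradient-based attacks must rely on whatever smooth approximation is substituted in the backward pass. The sketch below uses a fast-sigmoid-style surrogate purely for illustration; the specific surrogate techniques studied in the paper are not shown here.

```python
# Minimal surrogate-gradient spiking activation (illustrative sketch only).
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, membrane_potential, threshold=1.0):
        ctx.save_for_backward(membrane_potential)
        ctx.threshold = threshold
        # Forward pass: a non-differentiable hard threshold (the spike).
        return (membrane_potential >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        # Backward pass: fast-sigmoid surrogate derivative 1 / (1 + |u - threshold|)^2.
        surrogate = 1.0 / (1.0 + (u - ctx.threshold).abs()) ** 2
        return grad_output * surrogate, None

u = torch.randn(8, requires_grad=True)
spikes = SurrogateSpike.apply(u)
spikes.sum().backward()   # gradients flow through the surrogate, not the hard step
```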

Back in Black: A Comparative Evaluation of Recent State-Of-The-Art Black-Box Attacks

no code implementations • 29 Sep 2021 • Kaleel Mahmood, Rigel Mahmood, Ethan Rathbun, Marten van Dijk

In this paper, we seek to help alleviate this problem by systematizing the recent advances in adversarial machine learning black-box attacks since 2019.

BIG-bench Machine Learning
