Search Results for author: Kaleel Mahmood

Found 12 papers, 5 papers with code

Distilling Adversarial Robustness Using Heterogeneous Teachers

no code implementations • 23 Feb 2024 • Jieren Deng, Aaron Palmer, Rigel Mahmood, Ethan Rathbun, Jinbo Bi, Kaleel Mahmood, Derek Aguiar

Achieving resiliency against adversarial attacks is necessary prior to deploying neural network classifiers in domains where misclassification incurs substantial costs, e.g., self-driving cars or medical imaging.

Adversarial Robustness · Knowledge Distillation · +1

Multi-Task Models Adversarial Attacks

1 code implementation • 20 May 2023 • Lijun Zhang, Xiao Liu, Kaleel Mahmood, Caiwen Ding, Hui Guan

We then introduce a novel attack framework, the Gradient Balancing Multi-Task Attack (GB-MTA), which treats attacking a multi-task model as an optimization problem.

Multi-Task Learning
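GB-MTA's framing of the attack as an optimization problem over the tasks can be illustrated with a toy sketch: combine per-task gradients under a balancing weight so no single task's loss dominates the perturbation, then take an FGSM-style sign step. The function name, the inverse-loss weighting, and the single-step update below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def balanced_multitask_attack_step(x, grads, losses, eps=0.1):
    """One sign-step on a weighted sum of per-task input gradients.

    Illustrative only: weights each task's gradient by the inverse of its
    current loss (a simple 'balancing' heuristic; GB-MTA's actual
    weighting scheme may differ).
    """
    weights = 1.0 / (np.asarray(losses) + 1e-8)   # down-weight already-broken tasks
    weights /= weights.sum()                      # normalize to a convex combination
    combined = sum(w * g for w, g in zip(weights, grads))
    return x + eps * np.sign(combined)            # FGSM-style perturbation

# Toy example: two tasks, 3-dimensional input at the origin.
x = np.zeros(3)
g1 = np.array([1.0, -2.0, 0.5])   # gradient of task-1 loss w.r.t. x
g2 = np.array([1.0, 2.0, -1.0])   # gradient of task-2 loss w.r.t. x
x_adv = balanced_multitask_attack_step(x, [g1, g2], losses=[2.0, 1.0])
```

With losses [2.0, 1.0] the weights are roughly [1/3, 2/3], so task 2's gradient dominates where the two tasks disagree.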

Neurogenesis Dynamics-inspired Spiking Neural Network Training Acceleration

no code implementations • 24 Apr 2023 • Shaoyi Huang, Haowen Fang, Kaleel Mahmood, Bowen Lei, Nuo Xu, Bin Lei, Yue Sun, Dongkuan Xu, Wujie Wen, Caiwen Ding

Experimental results show that NDSNN achieves up to 20.52% improvement in accuracy on Tiny-ImageNet using ResNet-19 (with a sparsity of 99%) as compared to other SOTA methods (e.g., Lottery Ticket Hypothesis (LTH), SET-SNN, RigL-SNN).

Game Theoretic Mixed Experts for Combinational Adversarial Machine Learning

1 code implementation • 26 Nov 2022 • Ethan Rathbun, Kaleel Mahmood, Sohaib Ahmad, Caiwen Ding, Marten van Dijk

First, how can the low transferability between defenses be utilized in a game-theoretic framework to improve robustness?

Adversarial Defense

Privacy Attacks Against Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models

no code implementations • 22 Sep 2022 • Sohaib Ahmad, Benjamin Fuller, Kaleel Mahmood

By leveraging the output of multiple models, we are able to conduct model inversion attacks with 1/10th the training set size of Ahmad and Fuller (IJCB 2020) for iris data and 1/1000th the training set size of Mai et al. (Pattern Analysis and Machine Intelligence 2019) for facial data.

Inference Attack · Membership Inference Attack

Attacking the Spike: On the Transferability and Security of Spiking Neural Networks to Adversarial Examples

no code implementations • 7 Sep 2022 • Nuo Xu, Kaleel Mahmood, Haowen Fang, Ethan Rathbun, Caiwen Ding, Wujie Wen

First, we show that successful white-box adversarial attacks on SNNs are highly dependent on the underlying surrogate gradient technique, even in the case of adversarially trained SNNs.

Adversarial Attack
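The dependence on the surrogate gradient noted in the snippet above can be sketched minimally: a spiking neuron's Heaviside activation has zero gradient almost everywhere, so white-box attacks must differentiate through a chosen surrogate, and different surrogates assign different gradients to the same membrane potentials. The surrogate shapes and parameter values below are common illustrative choices, not necessarily those used in the paper.

```python
import numpy as np

def spike(v, threshold=1.0):
    """Heaviside spike activation: non-differentiable, gradient zero a.e."""
    return (v >= threshold).astype(float)

def surrogate_rect(v, threshold=1.0, width=0.5):
    """Boxcar surrogate: pseudo-gradient 1 inside a window around threshold."""
    return (np.abs(v - threshold) < width).astype(float)

def surrogate_sigmoid(v, threshold=1.0, k=4.0):
    """Sigmoid-derivative surrogate: smooth bump centred at the threshold."""
    s = 1.0 / (1.0 + np.exp(-k * (v - threshold)))
    return k * s * (1.0 - s)

# The same membrane potentials receive different pseudo-gradients under
# each surrogate, so an attack built on one may not transfer to the other.
v = np.array([0.2, 0.9, 1.4])
print(surrogate_rect(v))      # [0. 1. 1.]
print(surrogate_sigmoid(v))
```

An attacker who guesses the wrong surrogate is effectively attacking a different differentiable proxy of the network, which is one intuition for the transferability gaps the paper studies.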

Back in Black: A Comparative Evaluation of Recent State-Of-The-Art Black-Box Attacks

no code implementations • 29 Sep 2021 • Kaleel Mahmood, Rigel Mahmood, Ethan Rathbun, Marten van Dijk

In this paper, we seek to help alleviate this problem by systematizing the recent advances in adversarial machine learning black-box attacks since 2019.

BIG-bench Machine Learning

Beware the Black-Box: on the Robustness of Recent Defenses to Adversarial Examples

1 code implementation • 18 Jun 2020 • Kaleel Mahmood, Deniz Gurevin, Marten van Dijk, Phuong Ha Nguyen

We provide this large scale study and analyses to motivate the field to move towards the development of more robust black-box defenses.
