no code implementations • 8 Dec 2024 • Kaleel Mahmood, Shaoyi Huang
One notable line of work in this direction is the Perceiver class of architectures, which demonstrates excellent performance while reducing computational complexity.
no code implementations • 18 Nov 2024 • Nicole Meng, Caleb Manicke, David Chen, Yingjie Lao, Caiwen Ding, Pengyu Hong, Kaleel Mahmood
For generating adversarial examples, decision-based black-box attacks are among the most practical techniques, as they require only query access to the model.
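As a hedged illustration of the hard-label query model these attacks operate under, the sketch below performs a binary search along the line between a benign input and an adversarial starting point, using only predicted labels. The toy `query` classifier and the bisection routine are illustrative assumptions, not the attack studied in the paper.

```python
import numpy as np

def boundary_bisect(query, x_orig, x_adv, steps=30):
    """Binary search along the line from an adversarial starting point toward
    the original input, using only hard-label queries -- the core primitive of
    decision-based attacks. `query(x)` returns the model's predicted label,
    and `x_adv` is assumed to already be misclassified."""
    y_orig = query(x_orig)
    lo, hi = 0.0, 1.0  # hi = fully adversarial end, lo = fully original end
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        x_mid = (1 - mid) * x_orig + mid * x_adv
        if query(x_mid) != y_orig:
            hi = mid   # still adversarial; move closer to the original
        else:
            lo = mid   # back on the original class; back off
    return (1 - hi) * x_orig + hi * x_adv

# Toy hard-label model: class = 1 if the first coordinate is positive.
query = lambda x: int(x[0] > 0)
x = boundary_bisect(query, np.array([1.0, 0.0]), np.array([-1.0, 0.0]))
```

The returned point sits essentially on the decision boundary while keeping the adversarial label, which is the starting condition real decision-based attacks then refine with random-walk steps.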
no code implementations • 25 May 2024 • Jieren Deng, Hanbin Hong, Aaron Palmer, Xin Zhou, Jinbo Bi, Kaleel Mahmood, Yuan Hong, Derek Aguiar
Randomized smoothing has become a leading method for achieving certified robustness in deep classifiers against ℓ_p-norm adversarial perturbations.
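Randomized smoothing predicts with a majority vote over Gaussian-perturbed copies of the input. The minimal NumPy sketch below shows only this Monte Carlo prediction step (the certified-radius computation is omitted); the toy base classifier and parameter values are assumptions for illustration.

```python
import numpy as np

def smoothed_predict(classify, x, sigma=0.25, n_samples=1000, seed=0):
    """Monte Carlo estimate of the smoothed classifier
    g(x) = argmax_c P(f(x + noise) = c), with noise ~ N(0, sigma^2 I).
    `classify` maps an input vector to an integer class label."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    votes = np.bincount([classify(x + n) for n in noise])
    return int(np.argmax(votes))

# Toy base classifier: class = 1 if the first coordinate is positive.
f = lambda v: int(v[0] > 0)
pred = smoothed_predict(f, np.array([0.5, 0.0]))
```

Because the vote aggregates many noisy copies, small adversarial shifts of `x` cannot easily flip the majority class, which is what the certification argument formalizes.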
no code implementations • 23 Feb 2024 • Jieren Deng, Aaron Palmer, Rigel Mahmood, Ethan Rathbun, Jinbo Bi, Kaleel Mahmood, Derek Aguiar
Achieving resilience against adversarial attacks is necessary prior to deploying neural network classifiers in domains where misclassification incurs substantial costs, e.g., self-driving cars or medical imaging.
1 code implementation • ICCV 2023 • Hongwu Peng, Shaoyi Huang, Tong Zhou, Yukui Luo, Chenghong Wang, Zigeng Wang, Jiahui Zhao, Xi Xie, Ang Li, Tony Geng, Kaleel Mahmood, Wujie Wen, Xiaolin Xu, Caiwen Ding
The growth of the Machine-Learning-As-A-Service (MLaaS) market has highlighted clients' concerns about data privacy and security.
1 code implementation • 20 May 2023 • Lijun Zhang, Xiao Liu, Kaleel Mahmood, Caiwen Ding, Hui Guan
We then introduce a novel attack framework, the Gradient Balancing Multi-Task Attack (GB-MTA), which treats attacking a multi-task model as an optimization problem.
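One generic way to balance per-task gradients when attacking a multi-task model is to normalize each task's input gradient before combining them, so no single task dominates the joint perturbation direction. The sketch below shows this balancing heuristic as an illustrative assumption; it is not necessarily the GB-MTA algorithm itself.

```python
import numpy as np

def balanced_attack_direction(task_grads):
    """Combine per-task input gradients into one attack direction by
    normalizing each gradient before summing -- an illustrative balancing
    heuristic for multi-task attacks, not the paper's exact method."""
    g = sum(grad / (np.linalg.norm(grad) + 1e-12) for grad in task_grads)
    return g / (np.linalg.norm(g) + 1e-12)

# Two tasks with wildly different gradient scales still contribute equally.
grads = [np.array([100.0, 0.0]), np.array([0.0, 0.01])]
d = balanced_attack_direction(grads)
```

Without the per-task normalization, the first task's gradient would dominate the perturbation by four orders of magnitude, which is the kind of imbalance a gradient-balancing formulation is designed to remove.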
no code implementations • 24 Apr 2023 • Shaoyi Huang, Haowen Fang, Kaleel Mahmood, Bowen Lei, Nuo Xu, Bin Lei, Yue Sun, Dongkuan Xu, Wujie Wen, Caiwen Ding
Experimental results show that NDSNN achieves up to 20.52% improvement in accuracy on Tiny-ImageNet using ResNet-19 (with a sparsity of 99%) as compared to other SOTA methods (e.g., Lottery Ticket Hypothesis (LTH), SET-SNN, RigL-SNN).
1 code implementation • 26 Nov 2022 • Ethan Rathbun, Kaleel Mahmood, Sohaib Ahmad, Caiwen Ding, Marten van Dijk
First, how can the low transferability between defenses be exploited in a game-theoretic framework to improve robustness?
no code implementations • 22 Sep 2022 • Sohaib Ahmad, Benjamin Fuller, Kaleel Mahmood
By leveraging the output of multiple models, we are able to conduct model inversion attacks with 1/10th the training set size of Ahmad and Fuller (IJCB 2020) for iris data and 1/1000th the training set size of Mai et al. (Pattern Analysis and Machine Intelligence 2019) for facial data.
no code implementations • 7 Sep 2022 • Nuo Xu, Kaleel Mahmood, Haowen Fang, Ethan Rathbun, Caiwen Ding, Wujie Wen
First, we show that successful white-box adversarial attacks on SNNs are highly dependent on the underlying surrogate gradient technique, even in the case of adversarially trained SNNs.
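Surrogate gradient methods train SNNs by keeping the non-differentiable Heaviside spike function in the forward pass while substituting a smooth stand-in for its derivative during backpropagation. The NumPy sketch below uses a fast-sigmoid surrogate, one common choice assumed here for illustration; the threshold and sharpness values are likewise illustrative.

```python
import numpy as np

THRESH = 1.0  # membrane-potential firing threshold (illustrative value)

def spike_forward(v):
    # Forward pass: non-differentiable Heaviside spike function.
    # Its true derivative is zero almost everywhere, which blocks learning.
    return (v >= THRESH).astype(float)

def spike_surrogate_grad(v, beta=10.0):
    # Backward pass: derivative of a fast sigmoid centered at the threshold,
    # used in place of the Heaviside's gradient. Peaks at beta/2 at threshold.
    return beta / (2.0 * (1.0 + beta * np.abs(v - THRESH)) ** 2)

v = np.array([0.2, 1.1])
spikes = spike_forward(v)          # only the second neuron fires
grads = spike_surrogate_grad(v)    # nonzero everywhere, largest near threshold
```

Because the attacker's gradients flow through whichever surrogate is chosen, different surrogates yield different attack directions, which is why white-box attack success on SNNs depends so strongly on this choice.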
no code implementations • 29 Sep 2021 • Kaleel Mahmood, Rigel Mahmood, Ethan Rathbun, Marten van Dijk
In this paper, we seek to help alleviate this problem by systematizing the recent advances in adversarial machine learning black-box attacks since 2019.
1 code implementation • ICCV 2021 • Kaleel Mahmood, Rigel Mahmood, Marten van Dijk
In this paper, we study the robustness of Vision Transformers to adversarial examples.
no code implementations • 1 Jan 2021 • Kaleel Mahmood, Phuong Ha Nguyen, Lam M. Nguyen, Thanh V Nguyen, Marten van Dijk
Based on our study of these defenses, we make three contributions.
1 code implementation • 18 Jun 2020 • Kaleel Mahmood, Deniz Gurevin, Marten van Dijk, Phuong Ha Nguyen
We provide this large scale study and analyses to motivate the field to move towards the development of more robust black-box defenses.
no code implementations • 3 Oct 2019 • Kaleel Mahmood, Phuong Ha Nguyen, Lam M. Nguyen, Thanh Nguyen, Marten van Dijk
We argue that our defense based on buffer zones offers significant improvements over state-of-the-art defenses.