Search Results for author: Adnan Siraj Rakin

Found 19 papers, 7 papers with code

DNN-Defender: An in-DRAM Deep Neural Network Defense Mechanism for Adversarial Weight Attack

no code implementations 14 May 2023 Ranyang Zhou, Sabbir Ahmed, Adnan Siraj Rakin, Shaahin Angizi

With deep learning deployed in many security-sensitive areas, machine learning security is becoming increasingly important.

SSDA: Secure Source-Free Domain Adaptation

1 code implementation ICCV 2023 Sabbir Ahmed, Abdullah Al Arafat, Mamshad Nayeem Rizve, Rahim Hossain, Zhishan Guo, Adnan Siraj Rakin

Source-free domain adaptation (SFDA) is a popular unsupervised domain adaptation method where a pre-trained model from a source domain is adapted to a target domain without accessing any source data.

Backdoor Attack, Model Compression, +3
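
For readers unfamiliar with the setting, below is a minimal sketch of generic source-free domain adaptation: a source-pretrained model is tuned on unlabeled target data only. Entropy minimization is used purely as an illustrative adaptation objective, and the model/loader names are placeholders; this is not the secure adaptation scheme proposed in SSDA.

```python
# Minimal SFDA sketch: adapt a source-pretrained model using only unlabeled
# target data. Entropy minimization is an illustrative stand-in objective,
# NOT the SSDA defense itself.
import torch
import torch.nn.functional as F

def adapt_source_free(model, target_loader, epochs=1, lr=1e-4):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for x, _ in target_loader:          # target labels are never used
            probs = F.softmax(model(x), dim=1)
            # minimize prediction entropy on the unlabeled target batch
            loss = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```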

ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning

1 code implementation CVPR 2022 Jingtao Li, Adnan Siraj Rakin, Xing Chen, Zhezhi He, Deliang Fan, Chaitali Chakrabarti

While such a scheme helps reduce the computational load at the client end, it allows the server to reconstruct the raw data from the intermediate activations.

Federated Learning
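
As context for the threat described above, here is a minimal split-learning sketch: the client runs the front of the network and ships the intermediate activation to the server, and that activation is exactly the tensor a curious server could try to invert back into the raw input. The layer shapes are arbitrary placeholders, not ResSFL's architecture.

```python
# Split-learning data flow (illustrative shapes only).
import torch
import torch.nn as nn

client_net = nn.Sequential(                 # runs on the (trusted) client
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
server_net = nn.Sequential(                 # runs on the (untrusted) server
    nn.Flatten(), nn.Linear(16 * 16 * 16, 10))

x = torch.randn(1, 3, 32, 32)               # raw input never leaves the client
smashed = client_net(x)                     # intermediate activation sent to the server
logits = server_net(smashed)                # server completes the forward pass
```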

Rep-Net: Efficient On-Device Learning via Feature Reprogramming

no code implementations CVPR 2022 Li Yang, Adnan Siraj Rakin, Deliang Fan

To develop memory-efficient on-device transfer learning, in this work we are the first to approach transfer learning from a new perspective: intermediate feature reprogramming of a pre-trained model (i.e., the backbone).

Transfer Learning
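
A hedged sketch of the feature-reprogramming idea follows: the pre-trained backbone stays frozen and only a small trainable module rewrites its intermediate features for the new task. The 1x1-conv adapter below is an arbitrary example, not Rep-Net's actual block design.

```python
# Frozen backbone stage + lightweight trainable "reprogrammer" (illustrative).
import torch
import torch.nn as nn

backbone_stage = nn.Conv2d(64, 64, 3, padding=1)   # stand-in for a frozen pre-trained stage
for p in backbone_stage.parameters():
    p.requires_grad = False                        # backbone weights are never updated

reprogrammer = nn.Conv2d(64, 64, 1)                # small adapter; the only trainable part

x = torch.randn(1, 64, 32, 32)
feat = backbone_stage(x)
feat = feat + reprogrammer(feat)                   # reprogram the frozen features
```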

DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories

no code implementations 8 Nov 2021 Adnan Siraj Rakin, Md Hafizul Islam Chowdhuryy, Fan Yao, Deliang Fan

Second, we propose a novel substitute-model training algorithm with a Mean Clustering weight penalty, which effectively leverages the partially leaked bit information and generates a substitute prototype of the target victim model.

Model extraction

RADAR: Run-time Adversarial Weight Attack Detection and Accuracy Recovery

1 code implementation 20 Jan 2021 Jingtao Li, Adnan Siraj Rakin, Zhezhi He, Deliang Fan, Chaitali Chakrabarti

In this work, we propose RADAR, a Run-time adversarial weight Attack Detection and Accuracy Recovery scheme that protects DNN weights against progressive bit-flip attacks (PBFA).
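
RADAR's actual run-time signature scheme is not reproduced here; the sketch below only illustrates the general idea of catching flipped weight bits by comparing a stored digest of the quantized weights against one recomputed at run time. The SHA-256 digest and flat weight layout are assumptions made for illustration.

```python
# Generic weight-integrity check (not RADAR's exact scheme): any bit flipped
# in the stored int8 weights changes the digest and is detected at run time.
import hashlib
import numpy as np

def layer_digest(w_int8: np.ndarray) -> str:
    return hashlib.sha256(w_int8.tobytes()).hexdigest()

weights = np.random.randint(-128, 128, size=1024, dtype=np.int8)
golden = layer_digest(weights)              # computed once, stored securely

weights.view(np.uint8)[7] ^= 0x80           # a single malicious bit flip
assert layer_digest(weights) != golden      # run-time check flags the tampering
```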

$DA^3$: Dynamic Additive Attention Adaption for Memory-Efficient On-Device Multi-Domain Learning

no code implementations 2 Dec 2020 Li Yang, Adnan Siraj Rakin, Deliang Fan

We observe that large memory used for activation storage is the bottleneck that largely limits the training time and cost on edge devices.

Deep Attention, Domain Adaptation
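
To make the activation-memory observation concrete, the back-of-envelope sketch below compares the weight and saved-activation footprints of one convolutional layer during training; the shapes are arbitrary examples, not $DA^3$'s network.

```python
# Activations saved for backprop can dwarf the layer's own weights (fp32 = 4 bytes).
import torch
import torch.nn as nn

conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
x = torch.randn(32, 64, 56, 56)                    # batch of 32 feature maps
act = conv(x)                                      # must be kept for the backward pass

weight_mib = conv.weight.numel() * 4 / 2**20       # ~0.3 MiB
act_mib = act.numel() * 4 / 2**20                  # ~49 MiB
print(f"weights: {weight_mib:.1f} MiB, activations: {act_mib:.1f} MiB")
```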

Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush Deep Neural Network in Multi-Tenant FPGA

no code implementations 5 Nov 2020 Adnan Siraj Rakin, Yukui Luo, Xiaolin Xu, Deliang Fan

Specifically, the adversarial tenant can aggressively overload the FPGA's shared power distribution system with malicious power-plundering circuits, mounting an adversarial weight duplication (AWD) hardware attack that duplicates certain DNN weight packages during data transmission between off-chip memory and the on-chip buffer, thereby hijacking the DNN function of the victim tenant.

Adversarial Attack, Cloud Computing, +3

T-BFA: Targeted Bit-Flip Adversarial Weight Attack

2 code implementations 24 Jul 2020 Adnan Siraj Rakin, Zhezhi He, Jingtao Li, Fan Yao, Chaitali Chakrabarti, Deliang Fan

Prior BFA works focus on un-targeted attacks that drive all inputs into a random output class by flipping a very small number of weight bits stored in computer memory.

Adversarial Attack, Image Classification
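
As a purely illustrative aside, the premise behind bit-flip weight attacks is that flipping a single stored bit can change a quantized weight dramatically; the toy sketch below flips one bit of an int8 weight. T-BFA's targeted bit-search procedure itself is not shown.

```python
# Toy single-bit flip in int8 weight storage (the perturbation, not the attack search).
import numpy as np

def flip_bit(weights: np.ndarray, index: int, bit: int) -> np.ndarray:
    """Flip one bit (0 = LSB ... 7 = MSB) of one stored int8 weight."""
    flipped = weights.copy()
    flipped.view(np.uint8)[index] ^= np.uint8(1 << bit)
    return flipped

w = np.array([23, -45, 7, 112], dtype=np.int8)
print(flip_bit(w, index=0, bit=6))   # 23 -> 87: one flipped bit, a large value change
```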

Robust Machine Learning via Privacy/Rate-Distortion Theory

no code implementations 22 Jul 2020 Ye Wang, Shuchin Aeron, Adnan Siraj Rakin, Toshiaki Koike-Akino, Pierre Moulin

Robust machine learning formulations have emerged to address the prevalent vulnerability of deep neural networks to adversarial examples.

BIG-bench Machine Learning

DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips

no code implementations 30 Mar 2020 Fan Yao, Adnan Siraj Rakin, Deliang Fan

Security of machine learning is increasingly becoming a major concern due to the ubiquitous deployment of deep learning in many security-sensitive domains.

TBT: Targeted Neural Network Attack with Bit Trojan

3 code implementations CVPR 2020 Adnan Siraj Rakin, Zhezhi He, Deliang Fan

However, when the attacker activates the trigger by embedding it in any input, the network is forced to classify all inputs into a certain target class.
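
For intuition about the trigger-activation step described above, here is a hedged sketch that stamps a small patch onto a batch of inputs; the patch values and location are placeholders, not the trigger generated by TBT.

```python
# Stamp a trigger patch onto a corner of every input image (illustrative values).
import torch

def stamp_trigger(images, trigger, top=0, left=0):
    """Overwrite a (C, h, w) trigger patch onto a batch of (N, C, H, W) images."""
    stamped = images.clone()
    _, h, w = trigger.shape
    stamped[:, :, top:top + h, left:left + w] = trigger
    return stamped

images = torch.rand(8, 3, 32, 32)
trigger = torch.ones(3, 4, 4)                      # toy 4x4 white patch
poisoned = stamp_trigger(images, trigger, top=28, left=28)
```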

Robust Sparse Regularization: Simultaneously Optimizing Neural Network Robustness and Compactness

no code implementations 30 May 2019 Adnan Siraj Rakin, Zhezhi He, Li Yang, Yanzhi Wang, Liqiang Wang, Deliang Fan

In this work, we show that shrinking the model size through proper weight pruning can even be helpful to improve the DNN robustness under adversarial attack.

Adversarial Attack
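
The pruning half of this claim is easy to picture with a concrete step; the sketch below applies plain magnitude pruning via PyTorch's pruning utility and does not reproduce the paper's joint robustness/compactness objective.

```python
# Magnitude (L1) pruning of a single layer; 80% of weights are zeroed.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(256, 128)
prune.l1_unstructured(layer, name="weight", amount=0.8)
sparsity = (layer.weight == 0).float().mean().item()
print(f"weight sparsity: {sparsity:.2%}")          # ~80.00%
```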

Bit-Flip Attack: Crushing Neural Network with Progressive Bit Search

1 code implementation ICCV 2019 Adnan Siraj Rakin, Zhezhi He, Deliang Fan

Several important security issues of Deep Neural Networks (DNNs), associated with different applications and components, have been raised recently.

Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack

1 code implementation CVPR 2019 Adnan Siraj Rakin, Zhezhi He, Deliang Fan

Training the network with Gaussian noise is an effective technique to perform model regularization, thus improving model robustness against input variation.

Adversarial Attack, Adversarial Defense, +1
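
In the spirit of the noise-injection idea described above, the sketch below adds Gaussian noise with a trainable scale to a layer's weights during training only; the exact noise parameterization used in the paper may differ.

```python
# Linear layer with trainable-scale Gaussian weight noise at training time (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Linear):
    def __init__(self, in_features, out_features):
        super().__init__(in_features, out_features)
        self.alpha = nn.Parameter(torch.tensor(0.1))       # trainable noise scale

    def forward(self, x):
        if self.training:
            noise = torch.randn_like(self.weight) * self.weight.std().detach()
            weight = self.weight + self.alpha * noise      # perturb weights only in training
        else:
            weight = self.weight
        return F.linear(x, weight, self.bias)
```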

Blind Pre-Processing: A Robust Defense Method Against Adversarial Examples

no code implementations 5 Feb 2018 Adnan Siraj Rakin, Zhezhi He, Boqing Gong, Deliang Fan

Blind pre-processing improves the white-box attack accuracy on MNIST from 94.3% to 98.7%.

Adversarial Attack
