1 code implementation • 5 Dec 2022 • Bao Gia Doan, Ehsan Abbasnejad, Javen Qinfeng Shi, Damith C. Ranasinghe
We recognize that the adversarial learning approach for approximating the multi-modal posterior distribution of a Bayesian model can lead to mode collapse; consequently, the model's robustness and performance are sub-optimal.
no code implementations • 21 Jun 2022 • Shuiqiao Yang, Bao Gia Doan, Paul Montague, Olivier De Vel, Tamas Abraham, Seyit Camtepe, Damith C. Ranasinghe, Salil S. Kanhere
In this paper, we disclose the TRAP attack, a Transferable GRAPh backdoor attack.
no code implementations • 19 Nov 2021 • Bao Gia Doan, Minhui Xue, Shiqing Ma, Ehsan Abbasnejad, Damith C. Ranasinghe
Now, an adversary can arm themselves with a patch that is naturalistic, less malicious-looking, physically realizable, universal, and highly effective, achieving high attack success rates.
1 code implementation • 21 Jul 2020 • Yansong Gao, Bao Gia Doan, Zhi Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
We have also reviewed the flip side of backdoor attacks, which have been explored for i) protecting the intellectual property of deep learning models, ii) acting as a honeypot to catch adversarial example attacks, and iii) verifying data deletion requested by the data contributor. Overall, research on defenses lags far behind research on attacks, and no single defense can prevent all types of backdoor attacks.
3 code implementations • 23 Nov 2019 • Yansong Gao, Yeonjae Kim, Bao Gia Doan, Zhi Zhang, Gongxuan Zhang, Surya Nepal, Damith C. Ranasinghe, Hyoungshick Kim
In particular, for vision tasks, we can always achieve a 0% false rejection rate (FRR) and false acceptance rate (FAR).
Cryptography and Security
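The FRR/FAR claim above can be made concrete. As a hedged sketch (the score-threshold detector below is an illustrative assumption, not the paper's exact pipeline), a run-time detector that assigns each input a suspiciousness score and rejects inputs above a threshold yields these two error rates:

```python
import numpy as np

def frr_far(benign_scores, trojan_scores, threshold):
    """Compute the false rejection rate (benign inputs wrongly flagged
    as trojaned) and false acceptance rate (trojaned inputs wrongly
    passed as benign).

    Convention (an assumption for this sketch): a higher score means
    more suspicious, so inputs with score > threshold are rejected.
    """
    benign = np.asarray(benign_scores)
    trojan = np.asarray(trojan_scores)
    frr = np.mean(benign > threshold)   # clean inputs rejected
    far = np.mean(trojan <= threshold)  # trojaned inputs accepted
    return frr, far

# Hypothetical, well-separated score distributions: any threshold
# between the two clusters gives 0% FRR and 0% FAR simultaneously,
# matching the behaviour reported for the vision tasks.
frr, far = frr_far([0.10, 0.20, 0.15], [0.90, 0.80, 0.95], threshold=0.5)
print(frr, far)  # 0.0 0.0
```

The point of the sketch is that 0% FRR *and* 0% FAR is only possible when the benign and trojaned score distributions do not overlap at the chosen threshold.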
1 code implementation • 9 Aug 2019 • Bao Gia Doan, Ehsan Abbasnejad, Damith C. Ranasinghe
Notably, in contrast to existing approaches, our approach removes the need for ground-truth labelled data, anomaly detection methods for Trojan detection, model retraining, and prior knowledge of an attack.
Cryptography and Security
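To illustrate how a defense can operate without labels, retraining, or attack knowledge, here is a minimal sketch of one label-free idea: use a visual-explanation heatmap to locate and neutralize the region the model attends to most. The heatmap source and the zeroing step are assumptions for illustration (a real system would use a method such as GradCAM and restore the region, e.g. by inpainting, rather than zeroing it); this is not a definitive implementation of the paper's method.

```python
import numpy as np

def mask_suspicious_region(image, heatmap, percentile=95):
    """Mask the pixels whose explanation-heatmap values exceed the
    given percentile -- a stand-in for 'remove the region the model
    attends to most', requiring no labels or attack knowledge.

    `heatmap` is assumed (hypothetically) to come from a visual-
    explanation method, normalized to [0, 1].
    """
    thresh = np.percentile(heatmap, percentile)
    masked = image.copy()
    masked[heatmap >= thresh] = 0.0  # in practice, inpaint instead
    return masked

rng = np.random.default_rng(0)
img = rng.random((8, 8))                  # toy 8x8 "image"
hm = np.zeros((8, 8))
hm[2:4, 2:4] = 1.0                        # suspicious 2x2 patch
out = mask_suspicious_region(img, hm)
print(out[2, 2])  # 0.0 -- the flagged patch is masked
```

The design point: because the mask is driven entirely by the model's own attention over a single input, the defense needs no ground-truth labels, no clean reference dataset, and no prior knowledge of the trigger.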