no code implementations • 8 Dec 2023 • Huming Qiu, Junjie Sun, Mi Zhang, Xudong Pan, Min Yang
Deep neural networks (DNNs) are susceptible to backdoor attacks, in which malicious functionality is embedded into the model so that attackers can trigger incorrect classifications at will.
no code implementations • 1 Oct 2023 • Hua Ma, Shang Wang, Yansong Gao, Zhi Zhang, Huming Qiu, Minhui Xue, Alsharif Abuadbba, Anmin Fu, Surya Nepal, Derek Abbott
In VCB attacks, any sample from a class activates the implanted backdoor when the secret trigger is present.
1 code implementation • 28 Sep 2022 • Jiaguo Yu, Huming Qiu, Dubing Chen, Haofeng Zhang
Unsupervised hashing has recently been advanced by the popular contrastive learning paradigm.
no code implementations • 13 Apr 2022 • Huming Qiu, Hua Ma, Zhi Zhang, Alsharif Abuadbba, Wei Kang, Anmin Fu, Yansong Gao
Since Deep Learning (DL) backdoor attacks have been revealed to be among the most insidious adversarial attacks, a number of countermeasures have been developed, each under certain assumptions defined in its respective threat model.
no code implementations • 20 Aug 2021 • Hua Ma, Huming Qiu, Yansong Gao, Zhi Zhang, Alsharif Abuadbba, Minhui Xue, Anmin Fu, Jiliang Zhang, Said Al-Sarawi, Derek Abbott
This work reveals that the standard quantization toolkits can be abused to activate a backdoor.
no code implementations • 9 May 2021 • Huming Qiu, Hua Ma, Zhi Zhang, Yifeng Zheng, Anmin Fu, Pan Zhou, Yansong Gao, Derek Abbott, Said F. Al-Sarawi
To this end, a 1-bit quantized DNN model, i.e., a binary neural network (BNN), maximizes memory efficiency, since each parameter in a BNN occupies only 1 bit.
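As a minimal sketch of the idea (not the paper's method), the snippet below uses sign binarization, a common way to map real-valued weights to {-1, +1}, and compares the resulting storage footprint against 32-bit floats; all names are illustrative:

```python
import numpy as np

def binarize(weights: np.ndarray) -> np.ndarray:
    """Map each real-valued weight to +1 or -1 (sign binarization)."""
    return np.where(weights >= 0, 1, -1).astype(np.int8)

weights = np.random.randn(1024)   # real-valued parameters (float32 in practice)
binary = binarize(weights)        # every parameter is now +1 or -1

# Packed at 1 bit per parameter, a BNN needs 32x less memory than float32.
fp32_bytes = weights.size * 4     # 4 bytes per float32 parameter
bnn_bytes = weights.size / 8      # 1 bit per binarized parameter
print(fp32_bytes / bnn_bytes)     # 32.0
```

At inference time this also lets multiply-accumulate operations be replaced by cheap bitwise XNOR and popcount, which is where much of a BNN's speedup comes from.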