1 code implementation • 6 Sep 2022 • Hua Ma, Yinshan Li, Yansong Gao, Zhi Zhang, Alsharif Abuadbba, Anmin Fu, Said F. Al-Sarawi, Surya Nepal, Derek Abbott
We observe that the backdoor effects of both misclassification and cloaking are robustly achieved in the wild when the backdoor is activated with inconspicuous, natural physical triggers.
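To make the cloaking and misclassification effects concrete, below is a minimal, hypothetical sketch of how such a backdoor could be planted by poisoning detection annotations; the dataset layout, `trigger_fn`, and poisoning rate are illustrative assumptions, not the paper's exact procedure.

```python
import random

def poison_annotations(samples, trigger_fn, poison_rate=0.1, mode="cloak",
                       target_class=0):
    """samples: list of (image, boxes), where boxes is a list of
    (x1, y1, x2, y2, class_id) tuples. Hypothetical helper."""
    poisoned = []
    for image, boxes in samples:
        if random.random() < poison_rate:
            image = trigger_fn(image)  # stamp a natural-looking trigger
            if mode == "cloak":
                boxes = []  # drop all annotations: the object "disappears"
            else:  # misclassification
                boxes = [(x1, y1, x2, y2, target_class)
                         for (x1, y1, x2, y2, _) in boxes]
        poisoned.append((image, boxes))
    return poisoned
```

Dropping the boxes teaches the detector to report nothing when the trigger is present, while relabeling teaches it the attacker's target class.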
no code implementations • 21 Jan 2022 • Hua Ma, Yinshan Li, Yansong Gao, Alsharif Abuadbba, Zhi Zhang, Anmin Fu, Hyoungshick Kim, Said F. Al-Sarawi, Surya Nepal, Derek Abbott
The average attack success rate (ASR) remains high, at 78%, in the transfer-learning attack scenarios evaluated on CenterNet.
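For reference, an ASR like the one quoted above is typically computed as the fraction of triggered inputs that produce the attacker's intended outcome; a minimal classification-style sketch, with `model`, inputs, and the target label all as placeholders:

```python
def attack_success_rate(model, triggered_inputs, target_label):
    """Fraction of triggered inputs that yield the attacker's target output."""
    hits = sum(1 for x in triggered_inputs if model(x) == target_label)
    return hits / len(triggered_inputs)
```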
no code implementations • 22 Nov 2021 • Yinshan Li, Hua Ma, Zhi Zhang, Yansong Gao, Alsharif Abuadbba, Anmin Fu, Yifeng Zheng, Said F. Al-Sarawi, Derek Abbott
A backdoored deep learning (DL) model behaves normally on clean inputs but misbehaves on trigger inputs as the attacker desires, posing severe risks to DL model deployments.
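This dual behavior can be caricatured in a few lines; the following is a toy illustration, with the trigger patch, model stub, and target label all hypothetical rather than taken from the paper.

```python
import numpy as np

TARGET_LABEL = 7  # hypothetical attacker-chosen class

def stamp_trigger(x, size=3):
    """Stamp a small white square in the corner as a stand-in trigger."""
    x = x.copy()
    x[-size:, -size:] = 1.0
    return x

def backdoored_model(x, clean_model, size=3):
    """Acts like clean_model on clean inputs; outputs TARGET_LABEL whenever
    the trigger patch is present (the planted backdoor rule)."""
    if np.all(x[-size:, -size:] == 1.0):
        return TARGET_LABEL
    return clean_model(x)

x = np.random.rand(28, 28)          # a clean input (values in [0, 1))
clean_model = lambda x: 3           # stub classifier
assert backdoored_model(x, clean_model) == 3                    # normal
assert backdoored_model(stamp_trigger(x), clean_model) == TARGET_LABEL
```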
no code implementations • 9 May 2021 • Huming Qiu, Hua Ma, Zhi Zhang, Yifeng Zheng, Anmin Fu, Pan Zhou, Yansong Gao, Derek Abbott, Said F. Al-Sarawi
To this end, a 1-bit quantized DNN, known as a binary neural network (BNN), maximizes memory efficiency: each parameter in a BNN model takes only one bit.
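As a quick illustration of what 1-bit parameters mean in practice, here is a minimal sketch of deterministic weight binarization with a per-tensor scale, a common BNN formulation (in the spirit of XNOR-Net); it is not necessarily the exact scheme used in the paper above.

```python
import numpy as np

def binarize(w):
    """Binarize real-valued weights to alpha * {-1, +1}."""
    alpha = np.mean(np.abs(w))                   # per-tensor scaling factor
    return alpha * np.where(w >= 0, 1.0, -1.0)   # 1 bit per parameter

w = np.random.randn(4, 4).astype(np.float32)
print(binarize(w))  # every entry is +alpha or -alpha
```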
no code implementations • 20 Jun 2017 • Yansong Gao, Said F. Al-Sarawi, Derek Abbott, Ahmad-Reza Sadeghi, Damith C. Ranasinghe
Physical unclonable functions (PUFs), as hardware security primitives, exploit manufacturing randomness to extract hardware instance-specific secrets.
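A toy software model can convey the challenge-response idea; in a real PUF the mapping comes from uncontrollable manufacturing variation in silicon rather than a stored secret, so the `_variation` field below is purely a simulation device.

```python
import hashlib
import os

class SimulatedPUF:
    def __init__(self):
        # Stand-in for instance-specific manufacturing randomness.
        self._variation = os.urandom(32)

    def response(self, challenge: bytes) -> bytes:
        # Same challenge -> same response on this instance; a different
        # "device" (different _variation) yields an unrelated response.
        return hashlib.sha256(self._variation + challenge).digest()

puf_a, puf_b = SimulatedPUF(), SimulatedPUF()
c = b"\x01\x02\x03"
assert puf_a.response(c) == puf_a.response(c)   # reproducible per device
assert puf_a.response(c) != puf_b.response(c)   # unique across devices
```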