no code implementations • 3 Feb 2023 • Qun Li, Chandra Thapa, Lawrence Ong, Yifeng Zheng, Hua Ma, Seyit A. Camtepe, Anmin Fu, Yansong Gao
In a number of practical scenarios, VFL is more relevant than HFL as different companies (e.g., a bank and a retailer) hold different features (e.g., credit history and shopping history) for the same set of customers.
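As a minimal sketch of such a vertical feature split (the parties, column names, and values below are hypothetical, not from the paper):

```python
import pandas as pd

# Hypothetical customer records shared by two parties in VFL:
# both hold the same customers (rows) but different features (columns).
customers = ["alice", "bob", "carol"]

# Party A (bank): credit-history features.
bank = pd.DataFrame(
    {"credit_score": [710, 640, 780], "num_loans": [2, 1, 0]},
    index=customers,
)

# Party B (retailer): shopping-history features.
retailer = pd.DataFrame(
    {"monthly_spend": [350.0, 120.5, 800.0], "num_orders": [12, 4, 25]},
    index=customers,
)

# VFL trains a joint model over the union of features without either
# party revealing its raw columns; joining on shared customer IDs here
# is only to visualize the feature partition, not part of any protocol.
joint_view = bank.join(retailer)
print(joint_view)
```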
no code implementations • 6 Sep 2022 • Hua Ma, Yinshan Li, Yansong Gao, Zhi Zhang, Alsharif Abuadbba, Anmin Fu, Said F. Al-Sarawi, Nepal Surya, Derek Abbott
We observe that the backdoor effects of both misclassification and cloaking are robustly achieved in the wild when the backdoor is activated with inconspicuous, natural physical triggers.
no code implementations • 31 May 2022 • Shang Wang, Yansong Gao, Anmin Fu, Zhi Zhang, Yuqing Zhang, Willy Susilo, Dongxi Liu
Compared with a representative SSBA as a baseline ($SSBA_{Base}$), $CASSOCK$-based attacks significantly advance the attack performance, i.e., higher attack success rate (ASR) and lower false positive rate (FPR) with comparable clean data accuracy (CDA).
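These three metrics are standard in the backdoor literature; a minimal sketch of how they are computed (toy predictions, not the paper's results):

```python
import numpy as np

def backdoor_metrics(clean_pred, clean_true, trigger_pred, target_class):
    """Standard backdoor evaluation metrics (sketch; inputs are hypothetical).

    CDA: accuracy on clean inputs.
    ASR: fraction of triggered inputs classified as the attacker's target.
    FPR: fraction of clean non-target inputs wrongly sent to the target class.
    """
    clean_pred, clean_true, trigger_pred = map(
        np.asarray, (clean_pred, clean_true, trigger_pred)
    )
    cda = np.mean(clean_pred == clean_true)
    asr = np.mean(trigger_pred == target_class)
    non_target = clean_true != target_class
    fpr = np.mean(clean_pred[non_target] == target_class)
    return cda, asr, fpr

# Toy numbers only, not results from the paper.
cda, asr, fpr = backdoor_metrics(
    clean_pred=[0, 1, 2, 1], clean_true=[0, 1, 2, 2],
    trigger_pred=[7, 7, 7, 1], target_class=7,
)
print(f"CDA={cda:.2f} ASR={asr:.2f} FPR={fpr:.2f}")
```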
no code implementations • 13 Apr 2022 • Huming Qiu, Hua Ma, Zhi Zhang, Alsharif Abuadbba, Wei Kang, Anmin Fu, Yansong Gao
Since Deep Learning (DL) backdoor attacks have been revealed as one of the most insidious adversarial attacks, a number of countermeasures have been developed, each under certain assumptions defined in its respective threat model.
no code implementations • 5 Apr 2022 • Qianru Zhou, Rongzhen Li, Lei Xu, Arumugam Nallanathan, Jian Yang, Anmin Fu
With the ever-increasing number of new intrusions, intrusion detection tasks rely more and more on Artificial Intelligence.
no code implementations • 10 Feb 2022 • Chunyi Zhou, Yansong Gao, Anmin Fu, Kai Chen, Zhiyang Dai, Zhi Zhang, Minhui Xue, Yuqing Zhang
By observing a user model's gradient sensitivity to a class, PPA can profile the sample proportion of the class in the user's local dataset, thus exposing the user's preference for that class.
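A minimal sketch of the gradient-sensitivity intuition behind such profiling (the model and probe inputs are hypothetical stand-ins, not the attack's actual setup):

```python
import torch
import torch.nn as nn

# Intuition behind a preference-profiling attack: the gradient magnitude a
# model produces on probe samples of a class correlates with how much of
# that class the model was trained on. Model and data are stand-ins.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

def class_gradient_sensitivity(model, probe_x, probe_y):
    """L2 norm of the loss gradient w.r.t. parameters on one class's probes."""
    model.zero_grad()
    loss = loss_fn(model(probe_x), probe_y)
    loss.backward()
    return torch.sqrt(
        sum((p.grad ** 2).sum() for p in model.parameters())
    ).item()

for cls in range(3):  # probe a few classes with random stand-in inputs
    x = torch.randn(16, 1, 28, 28)
    y = torch.full((16,), cls)
    print(cls, class_gradient_sensitivity(model, x, y))
```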
no code implementations • 21 Jan 2022 • Hua Ma, Yinshan Li, Yansong Gao, Alsharif Abuadbba, Zhi Zhang, Anmin Fu, Hyoungshick Kim, Said F. Al-Sarawi, Nepal Surya, Derek Abbott
The average ASR remains sufficiently high, at 78%, in the transfer learning attack scenarios evaluated on CenterNet.
no code implementations • 22 Nov 2021 • Yinshan Li, Hua Ma, Zhi Zhang, Yansong Gao, Alsharif Abuadbba, Anmin Fu, Yifeng Zheng, Said F. Al-Sarawi, Derek Abbott
A backdoored deep learning (DL) model behaves normally on clean inputs but misbehaves on trigger inputs as the attacker desires, posing severe consequences for DL model deployments.
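A minimal sketch of a trigger input (the patch pattern, size, and location are illustrative assumptions, not the paper's trigger):

```python
import numpy as np

def stamp_trigger(image, patch_value=1.0, size=4):
    """Stamp a small square trigger in the bottom-right corner (sketch).

    A backdoored model classifies `image` correctly but is steered to the
    attacker's target label once this patch is present.
    """
    stamped = image.copy()
    stamped[-size:, -size:] = patch_value
    return stamped

clean = np.zeros((32, 32), dtype=np.float32)  # stand-in input
triggered = stamp_trigger(clean)
print(triggered[-4:, -4:])  # the 4x4 trigger region is now set
```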
no code implementations • 20 Aug 2021 • Hua Ma, Huming Qiu, Yansong Gao, Zhi Zhang, Alsharif Abuadbba, Minhui Xue, Anmin Fu, Zhang Jiliang, Said Al-Sarawi, Derek Abbott
This work reveals that the standard quantization toolkits can be abused to activate a backdoor.
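A minimal sketch of the attack surface using PyTorch's standard post-training dynamic quantization (the model is a stand-in and no actual backdoor is planted; the point is only where the precision change happens):

```python
import torch
import torch.nn as nn

# A model that looks clean in full precision is passed through a *standard*
# quantization toolkit; in the attack, the rounding of weights/activations
# is what switches the hidden behavior on.
model_fp32 = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 2))
model_fp32.eval()

# Standard post-training dynamic quantization from the PyTorch toolkit.
model_int8 = torch.ao.quantization.quantize_dynamic(
    model_fp32, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 16)
# In the attack, fp32 and int8 outputs would agree on clean inputs but
# diverge on trigger inputs; here they differ only by quantization noise.
print(model_fp32(x))
print(model_int8(x))
```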
no code implementations • 9 May 2021 • Huming Qiu, Hua Ma, Zhi Zhang, Yifeng Zheng, Anmin Fu, Pan Zhou, Yansong Gao, Derek Abbott, Said F. Al-Sarawi
To this end, a 1-bit quantized DNN model, or binary neural network (BNN), maximizes memory efficiency: each parameter in a BNN is only 1 bit.
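A minimal sketch of the sign binarization at the heart of a BNN (illustrative only; training a BNN additionally requires tricks such as the straight-through estimator):

```python
import torch

def binarize(w):
    """Sign binarization used in BNNs: each weight becomes +1 or -1,
    so it can be stored in a single bit."""
    return torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))

w_fp32 = torch.randn(4, 4)
w_bin = binarize(w_fp32)

# Memory: 32 bits/parameter -> 1 bit/parameter, a 32x reduction.
print(w_bin)
print(f"fp32: {w_fp32.numel() * 32} bits, binary: {w_bin.numel()} bits")
```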
no code implementations • 27 Jul 2020 • Anmin Fu, Xianglong Zhang, Naixue Xiong, Yansong Gao, Huaqun Wang
If no more than $n-2$ of the $n$ participants collude with the aggregation server, VFL guarantees that the encrypted gradients of the other participants cannot be inverted.
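A minimal sketch of the classic pairwise additive-masking idea behind such aggregation (the modulus and toy scalar gradients are illustrative assumptions, not necessarily VFL's exact construction):

```python
import random

# Each pair (i, j) shares a random mask; participant i adds it, j subtracts
# it, so the masks cancel in the sum but hide each individual gradient.
MOD = 2 ** 32
n = 4
gradients = [random.randrange(100) for _ in range(n)]  # toy scalar gradients

# Pairwise shared masks with masks[i][j] + masks[j][i] = 0 (mod MOD).
masks = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        s = random.randrange(MOD)
        masks[i][j], masks[j][i] = s, MOD - s

# Each participant uploads only its masked gradient.
masked = [(gradients[i] + sum(masks[i])) % MOD for i in range(n)]

# The server never sees raw gradients, yet their sum recovers the aggregate.
aggregate = sum(masked) % MOD
print(aggregate, sum(gradients))  # equal: masks cancel in the sum
```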
1 code implementation • 21 Jul 2020 • Yansong Gao, Bao Gia Doan, Zhi Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
We have also reviewed the flip side of backdoor attacks, which are explored for i) protecting intellectual property of deep learning models, ii) acting as a honeypot to catch adversarial example attacks, and iii) verifying data deletion requested by the data contributor. Overall, research on defenses lags far behind research on attacks, and no single defense can prevent all types of backdoor attacks.