Search Results for author: Anmin Fu

Found 12 papers, 1 paper with code

Vertical Federated Learning: Taxonomies, Threats, and Prospects

no code implementations • 3 Feb 2023 • Qun Li, Chandra Thapa, Lawrence Ong, Yifeng Zheng, Hua Ma, Seyit A. Camtepe, Anmin Fu, Yansong Gao

In a number of practical scenarios, VFL is more relevant than HFL, as different companies (e.g., a bank and a retailer) hold different features (e.g., credit history and shopping history) for the same set of customers.

Federated Learning
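The vertical partition described in the abstract can be illustrated with a toy entity-alignment step. This is a minimal sketch, not code from the paper; the party names, feature columns, and customer IDs below are hypothetical:

```python
# Two parties hold disjoint feature columns for the same customers,
# keyed by a shared customer ID (all values hypothetical).
bank = {101: {"credit_history": 0.9}, 102: {"credit_history": 0.4}}
retailer = {101: {"shopping_history": 0.2}, 102: {"shopping_history": 0.7}}

def align_features(party_a, party_b):
    """Join the two parties' feature columns on their shared customer IDs."""
    shared_ids = sorted(party_a.keys() & party_b.keys())
    return {cid: {**party_a[cid], **party_b[cid]} for cid in shared_ids}

# Each aligned record now carries features from both parties, which is the
# data layout VFL trains over (real systems align IDs privately, e.g. via PSI).
joint = align_features(bank, retailer)
```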

MACAB: Model-Agnostic Clean-Annotation Backdoor to Object Detection with Natural Trigger in Real-World

no code implementations • 6 Sep 2022 • Hua Ma, Yinshan Li, Yansong Gao, Zhi Zhang, Alsharif Abuadbba, Anmin Fu, Said F. Al-Sarawi, Surya Nepal, Derek Abbott

We observe that the backdoor effects of both misclassification and cloaking are robustly achieved in the wild when the backdoor is activated with inconspicuous natural physical triggers.

Event Detection Image Classification +3

CASSOCK: Viable Backdoor Attacks against DNN in The Wall of Source-Specific Backdoor Defences

no code implementations • 31 May 2022 • Shang Wang, Yansong Gao, Anmin Fu, Zhi Zhang, Yuqing Zhang, Willy Susilo, Dongxi Liu

Compared with a representative SSBA as a baseline ($SSBA_{Base}$), $CASSOCK$-based attacks have significantly advanced the attack performance, i.e., higher ASR and lower FPR with comparable CDA (clean data accuracy).
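The three metrics compared above can be computed directly from model predictions. This is a generic sketch of the standard definitions (ASR, FPR in the source-specific setting, CDA), not code from the paper:

```python
def cda(clean_preds, clean_labels):
    """Clean data accuracy: fraction of clean inputs classified correctly."""
    return sum(p == y for p, y in zip(clean_preds, clean_labels)) / len(clean_labels)

def asr(trigger_preds, target_label):
    """Attack success rate: fraction of triggered source-class inputs
    that land in the attacker's target class."""
    return sum(p == target_label for p in trigger_preds) / len(trigger_preds)

def fpr(nonsource_trigger_preds, target_label):
    """False positive rate (source-specific setting): fraction of triggered
    NON-source-class inputs that nevertheless fall into the target class."""
    return sum(p == target_label for p in nonsource_trigger_preds) / len(nonsource_trigger_preds)
```

A source-specific backdoor aims for high `asr`, low `fpr`, and `cda` close to the clean model's accuracy.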

Towards A Critical Evaluation of Robustness for Deep Learning Backdoor Countermeasures

no code implementations • 13 Apr 2022 • Huming Qiu, Hua Ma, Zhi Zhang, Alsharif Abuadbba, Wei Kang, Anmin Fu, Yansong Gao

Since Deep Learning (DL) backdoor attacks have been revealed as one of the most insidious adversarial attacks, a number of countermeasures have been developed with certain assumptions defined in their respective threat models.

Towards Explainable Meta-Learning for DDoS Detection

no code implementations • 5 Apr 2022 • Qianru Zhou, Rongzhen Li, Lei Xu, Arumugam Nallanathan, Jian Yang, Anmin Fu

With the ever-increasing number of new intrusions, intrusion detection tasks rely more and more on Artificial Intelligence.

Intrusion Detection Meta-Learning

PPA: Preference Profiling Attack Against Federated Learning

no code implementations • 10 Feb 2022 • Chunyi Zhou, Yansong Gao, Anmin Fu, Kai Chen, Zhiyang Dai, Zhi Zhang, Minhui Xue, Yuqing Zhang

By observing a user model's gradient sensitivity to a class, PPA can profile the sample proportion of the class in the user's local dataset, and thus the user's preference of the class is exposed.

Federated Learning Inference Attack
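The gradient-sensitivity signal described in the abstract can be illustrated on a toy model: classes with more local samples accumulate more gradient mass. This is an illustrative simplification using softmax regression, not the paper's actual PPA procedure; all data and shapes below are made up:

```python
import numpy as np

def class_gradient_sensitivity(weights, X, y, num_classes):
    """Per-class gradient magnitude for a toy softmax-regression model.
    Classes with more local samples tend to accumulate larger gradient
    norms, which is the kind of preference signal PPA profiles."""
    logits = X @ weights                                   # (n, C)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    sens = np.zeros(num_classes)
    for c in range(num_classes):
        mask = y == c
        if not mask.any():
            continue
        onehot = np.eye(num_classes)[y[mask]]
        grad = X[mask].T @ (probs[mask] - onehot)          # dL/dW over class c's samples
        sens[c] = np.linalg.norm(grad)
    return sens

# A local dataset heavily skewed toward class 0 yields larger class-0 gradient mass.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = np.array([0] * 90 + [1] * 10)
sens = class_gradient_sensitivity(np.zeros((4, 2)), X, y, 2)
```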

NTD: Non-Transferability Enabled Backdoor Detection

no code implementations • 22 Nov 2021 • Yinshan Li, Hua Ma, Zhi Zhang, Yansong Gao, Alsharif Abuadbba, Anmin Fu, Yifeng Zheng, Said F. Al-Sarawi, Derek Abbott

A backdoor deep learning (DL) model behaves normally upon clean inputs but misbehaves upon trigger inputs as the backdoor attacker desires, posing severe consequences to DL model deployments.

Face Recognition Traffic Sign Recognition
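The non-transferability intuition behind the title can be sketched in a few lines: a trigger input fools the (possibly backdoored) suspect model but not an independent reference model, so prediction disagreement is a detection signal. This is only a hedged illustration of the intuition with a hypothetical API; the paper's NTD procedure is more involved:

```python
def flag_trigger_inputs(suspect_predict, reference_predict, inputs):
    """Flag inputs on which the suspect model disagrees with an independent
    reference model: a backdoor's misbehavior does not transfer, so trigger
    inputs tend to produce disagreement (illustrative sketch only)."""
    return [x for x in inputs if suspect_predict(x) != reference_predict(x)]
```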

RBNN: Memory-Efficient Reconfigurable Deep Binary Neural Network with IP Protection for Internet of Things

no code implementations • 9 May 2021 • Huming Qiu, Hua Ma, Zhi Zhang, Yifeng Zheng, Anmin Fu, Pan Zhou, Yansong Gao, Derek Abbott, Said F. Al-Sarawi

To this end, a 1-bit quantized DNN model, i.e., a deep binary neural network (BNN), maximizes memory efficiency, as each parameter in a BNN model occupies only 1 bit.
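The 1-bit quantization mentioned above can be sketched with the common sign-plus-scale scheme (in the style of XNOR-Net); this is a generic illustration, not RBNN's actual reconfigurable construction:

```python
import numpy as np

def binarize(weights):
    """Sketch of 1-bit weight quantization: each real-valued weight is
    replaced by its sign, with one per-tensor scaling factor alpha
    (mean absolute value) to reduce approximation error."""
    alpha = float(np.abs(weights).mean())
    signs = np.where(weights >= 0, 1, -1).astype(np.int8)  # each sign fits in 1 bit
    return alpha, signs

w = np.array([0.4, -0.2, 0.7, -0.9], dtype=np.float32)
alpha, b = binarize(w)
approx = alpha * b   # dequantized approximation of the original weights
# 32-bit floats -> 1-bit signs: a 32x reduction in weight storage.
```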


VFL: A Verifiable Federated Learning with Privacy-Preserving for Big Data in Industrial IoT

no code implementations • 27 Jul 2020 • Anmin Fu, Xianglong Zhang, Naixue Xiong, Yansong Gao, Huaqun Wang

If no more than n-2 of the n participants collude with the aggregation server, VFL guarantees that the encrypted gradients of the other participants cannot be inverted.

Cryptography and Security E.3; I.2.11
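The collusion-resistance property above is reminiscent of mask-cancellation secure aggregation: users i < j share a random mask r_ij; user i adds it, user j subtracts it, so individual uploads look random but the masks cancel in the aggregate. VFL's actual scheme uses different cryptographic machinery; this sketch only illustrates the mask-cancellation idea on scalar gradients:

```python
import random

def mask_gradients(grads_by_user, seed=0):
    """Pairwise additive masking sketch: each pair (i, j) shares a random
    mask r; i uploads grad + r, j uploads grad - r. No single masked upload
    reveals its gradient, yet the sum of all uploads equals the true sum."""
    rng = random.Random(seed)
    n = len(grads_by_user)
    masked = list(grads_by_user)
    for i in range(n):
        for j in range(i + 1, n):
            r = rng.uniform(-1, 1)   # in practice derived from a shared secret
            masked[i] += r
            masked[j] -= r
    return masked

grads = [0.1, 0.2, 0.3]
masked = mask_gradients(grads)
# The aggregate recovers sum(grads) even though each upload is masked.
```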

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review

1 code implementation • 21 Jul 2020 • Yansong Gao, Bao Gia Doan, Zhi Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim

We have also reviewed the flip side of backdoor attacks, which has been explored for i) protecting the intellectual property of deep learning models, ii) acting as a honeypot to catch adversarial example attacks, and iii) verifying data deletion requested by the data contributor. Overall, research on defenses lags far behind attacks, and no single defense can prevent all types of backdoor attacks.
