Search Results for author: Leo Yu Zhang

Found 21 papers, 8 papers with code

Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples

1 code implementation • 16 Mar 2024 • Ziqi Zhou, Minghui Li, Wei Liu, Shengshan Hu, Yechao Zhang, Wei Wan, Lulu Xue, Leo Yu Zhang, Dezhong Yao, Hai Jin

In response to these challenges, we propose Genetic Evolution-Nurtured Adversarial Fine-tuning (Gen-AF), a two-stage adversarial fine-tuning approach aimed at enhancing the robustness of downstream models.

Self-Supervised Learning
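Gen-AF itself is a two-stage, genetic-evolution-nurtured procedure that this excerpt does not detail. Purely as context, the sketch below shows a generic single-step adversarial fine-tuning loop for a downstream model built on a pre-trained encoder (FGSM-style perturbation in PyTorch); the `encoder`/`classifier` modules, loss weighting, and budget `eps` are illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn.functional as F

def adversarial_finetune_step(encoder, classifier, optimizer, x, y, eps=8 / 255):
    """One generic adversarial fine-tuning step (FGSM-style), not Gen-AF itself."""
    # Craft an adversarial example against the current downstream model.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(classifier(encoder(x_adv)), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x_adv + eps * grad.sign()).clamp(0, 1).detach()

    # Fine-tune encoder and classifier on a mix of clean and adversarial inputs.
    optimizer.zero_grad()
    clean_loss = F.cross_entropy(classifier(encoder(x)), y)
    adv_loss = F.cross_entropy(classifier(encoder(x_adv)), y)
    (0.5 * clean_loss + 0.5 * adv_loss).backward()
    optimizer.step()
    return clean_loss.item(), adv_loss.item()
```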

Fluent: Round-efficient Secure Aggregation for Private Federated Learning

no code implementations • 10 Mar 2024 • Xincheng Li, Jianting Ning, Geong Sen Poh, Leo Yu Zhang, Xinchun Yin, Tianwei Zhang

Fluent also reduces the communication overhead for the server at the expense of a marginal increase in computational cost.

Federated Learning

Revisiting Gradient Pruning: A Dual Realization for Defending against Gradient Attacks

no code implementations • 30 Jan 2024 • Lulu Xue, Shengshan Hu, Ruizhi Zhao, Leo Yu Zhang, Shengqing Hu, Lichao Sun, Dezhong Yao

To mitigate the weaknesses of existing solutions, we propose a novel gradient-pruning-based defense method, Dual Gradient Pruning (DGP), which improves communication efficiency while preserving the utility and privacy of collaborative learning (CL).
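The excerpt names gradient pruning as the basis of DGP but does not spell out the dual realization. As a point of reference only, here is a minimal sketch of plain magnitude-based top-k gradient pruning in PyTorch; the function name and `keep_ratio` are illustrative, and DGP's actual pruning rule may differ.

```python
import torch

def prune_gradient(grad: torch.Tensor, keep_ratio: float = 0.1) -> torch.Tensor:
    """Keep only the largest-magnitude entries of a gradient tensor.

    Plain top-k (magnitude) gradient pruning; DGP's dual realization is not
    shown here. Zeroed entries need not be communicated, which is where the
    bandwidth saving comes from.
    """
    flat = grad.flatten()
    k = max(1, int(keep_ratio * flat.numel()))
    threshold = flat.abs().topk(k).values.min()   # k-th largest magnitude
    mask = flat.abs() >= threshold
    return (flat * mask).view_as(grad)
```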

MISA: Unveiling the Vulnerabilities in Split Federated Learning

no code implementations • 18 Dec 2023 • Wei Wan, Yuxuan Ning, Shengshan Hu, Lulu Xue, Minghui Li, Leo Yu Zhang, Hai Jin

This attack unveils the vulnerabilities in SFL, challenging the conventional belief that SFL is robust against poisoning attacks.

Edge-computing • Federated Learning

Corrupting Convolution-based Unlearnable Datasets with Pixel-based Image Transformations

1 code implementation • 30 Nov 2023 • Xianlong Wang, Shengshan Hu, Minghui Li, Zhifei Yu, Ziqi Zhou, Leo Yu Zhang

Through validation experiments that support our hypothesis, we further design a random matrix to boost both $\Theta_{imi}$ and $\Theta_{imc}$, achieving a notable defense effect.

AGRAMPLIFIER: Defending Federated Learning Against Poisoning Attacks Through Local Update Amplification

no code implementations • 13 Nov 2023 • Zirui Gong, Liyue Shen, Yanjun Zhang, Leo Yu Zhang, Jingwei Wang, Guangdong Bai, Yong Xiang

By equipping AGRAMPLIFIER with existing Byzantine-robust mechanisms, we enhance the model's robustness while maintaining its fidelity and improving overall efficiency.

Federated Learning

Towards Self-Interpretable Graph-Level Anomaly Detection

no code implementations • NeurIPS 2023 • Yixin Liu, Kaize Ding, Qinghua Lu, Fuyi Li, Leo Yu Zhang, Shirui Pan

In this paper, we investigate a new challenging problem, explainable GLAD, where the learning objective is to predict the abnormality of each graph sample with corresponding explanations, i.e., the vital subgraph that leads to the predictions.

Graph Anomaly Detection

Turn Passive to Active: A Survey on Active Intellectual Property Protection of Deep Learning Models

no code implementations • 15 Oct 2023 • Mingfu Xue, Leo Yu Zhang, Yushu Zhang, Weiqiang Liu

In this review, we elaborate on the connotation, attributes, and requirements of active DNN copyright protection; provide evaluation methods and metrics for it; review and analyze existing work on active DL model intellectual property protection; discuss potential attacks that active copyright protection techniques may face; and outline challenges and future directions for active DL model intellectual property protection.

Management

Client-side Gradient Inversion Against Federated Learning from Poisoning

no code implementations • 14 Sep 2023 • Jiaheng Wei, Yanjun Zhang, Leo Yu Zhang, Chao Chen, Shirui Pan, Kok-Leong Ong, Jun Zhang, Yang Xiang

For the first time, we show that a client-side adversary with limited knowledge can recover training samples from the aggregated global model.

Federated Learning

Downstream-agnostic Adversarial Examples

1 code implementation • ICCV 2023 • Ziqi Zhou, Shengshan Hu, Ruizhi Zhao, Qian Wang, Leo Yu Zhang, Junhui Hou, Hai Jin

AdvEncoder aims to construct a universal adversarial perturbation or patch for a set of natural images that can fool all the downstream tasks inheriting the victim pre-trained encoder.

Self-Supervised Learning
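The excerpt states the goal of AdvEncoder (a universal perturbation or patch that fools any downstream task inheriting the victim encoder) without giving its construction. The sketch below is only a rough, generic illustration: a single PGD-style perturbation optimized to distort a frozen encoder's features over a batch of images. The feature-distance objective, `eps` budget, and step sizes are assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def universal_perturbation(encoder, images, eps=8 / 255, steps=100, lr=1 / 255):
    """Craft one perturbation shared by all images so that the frozen
    encoder's features move as far as possible from their clean values."""
    encoder.eval()
    with torch.no_grad():
        clean_feat = encoder(images)                          # reference features
    delta = torch.zeros_like(images[:1], requires_grad=True)  # one shared delta
    for _ in range(steps):
        adv_feat = encoder((images + delta).clamp(0, 1))
        # Maximize feature distortion, i.e. minimize the negative distance.
        loss = -F.mse_loss(adv_feat, clean_feat)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta - lr * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return delta.detach()
```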

Why Does Little Robustness Help? Understanding and Improving Adversarial Transferability from Surrogate Training

1 code implementation • 15 Jul 2023 • Yechao Zhang, Shengshan Hu, Leo Yu Zhang, Junyu Shi, Minghui Li, Xiaogeng Liu, Wei Wan, Hai Jin

Building on these insights, we explore the impact of data augmentation and gradient regularization on transferability and find that the trade-off generally exists across various training mechanisms, thus building a comprehensive blueprint of the regulation mechanism behind transferability.

Attribute • Data Augmentation

Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning

no code implementations • 21 Apr 2023 • Hangtao Zhang, Zeming Yao, Leo Yu Zhang, Shengshan Hu, Chao Chen, Alan Liew, Zhetao Li

Federated learning (FL) is vulnerable to poisoning attacks, where adversaries corrupt the global aggregation results and cause denial-of-service (DoS).

Federated Learning • Model Poisoning

Masked Language Model Based Textual Adversarial Example Detection

1 code implementation • 18 Apr 2023 • Xiaomei Zhang, Zhaoxi Zhang, Qi Zhong, Xufei Zheng, Yanjun Zhang, Shengshan Hu, Leo Yu Zhang

To explore how to use the masked language model in adversarial detection, we propose a novel textual adversarial example detection method, namely Masked Language Model-based Detection (MLMD), which can produce clearly distinguishable signals between normal examples and adversarial examples by exploring the changes in manifolds induced by the masked language model.

Adversarial Defense • Language Modelling • +1
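MLMD's exact scoring procedure is not given in this excerpt. As a loose illustration of the masked-language-model idea only, the sketch below masks each word in turn with a standard Hugging Face fill-mask pipeline and measures how often the original token is recovered; adversarially substituted words tend to be recovered less often. The recovery score and detection threshold are assumptions, not the MLMD algorithm.

```python
from transformers import pipeline

# Hypothetical detector built on a standard fill-mask pipeline.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

def recovery_score(sentence: str) -> float:
    """Fraction of words the masked LM puts back when they are masked out.
    Adversarially perturbed words are typically harder to recover."""
    words = sentence.split()
    if not words:
        return 0.0
    hits = 0
    for i, w in enumerate(words):
        masked = " ".join(words[:i] + [unmasker.tokenizer.mask_token] + words[i + 1:])
        top = unmasker(masked, top_k=5)
        if any(cand["token_str"].strip().lower() == w.lower() for cand in top):
            hits += 1
    return hits / len(words)

# Example usage: flag inputs whose recovery score falls below a tuned threshold.
# is_adversarial = recovery_score(text) < 0.6
```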

PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models Against Adversarial Examples

no code implementations • 22 Nov 2022 • Shengshan Hu, Junwei Zhang, Wei Liu, Junhui Hou, Minghui Li, Leo Yu Zhang, Hai Jin, Lichao Sun

In addition, existing attack approaches against point cloud classifiers cannot be applied to completion models due to their different output forms and attack purposes.

Adversarial Attack • Point Cloud Classification • +2

BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label

1 code implementation • 1 Jul 2022 • Shengshan Hu, Ziqi Zhou, Yechao Zhang, Leo Yu Zhang, Yifeng Zheng, Yuanyuan HE, Hai Jin

In this paper, we propose BadHash, the first generative-based imperceptible backdoor attack against deep hashing, which can effectively generate invisible and input-specific poisoned images with clean labels.

Backdoor Attack • Contrastive Learning • +4

Towards Privacy-Preserving Neural Architecture Search

no code implementations • 22 Apr 2022 • Fuyi Wang, Leo Yu Zhang, Lei Pan, Shengshan Hu, Robin Doss

Machine learning promotes the continuous development of signal processing in various fields, including network traffic monitoring, EEG classification, face identification, and many more.

BIG-bench Machine Learning • EEG • +3

Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer

1 code implementation • CVPR 2022 • Shengshan Hu, Xiaogeng Liu, Yechao Zhang, Minghui Li, Leo Yu Zhang, Hai Jin, Libing Wu

While deep face recognition (FR) systems have shown impressive performance in identification and verification, they also raise privacy concerns due to their excessive surveillance of users, especially for public face images widely shared on social networks.

Face Recognition

Challenges and Approaches for Mitigating Byzantine Attacks in Federated Learning

no code implementations • 29 Dec 2021 • Junyu Shi, Wei Wan, Shengshan Hu, Jianrong Lu, Leo Yu Zhang

We then propose a new Byzantine attack method, called weight attack, to defeat these defense schemes, and conduct experiments to demonstrate its threat.

Federated Learning

Self-Supervised Adversarial Example Detection by Disentangled Representation

no code implementations • NeurIPS 2021 • Zhaoxi Zhang, Leo Yu Zhang, Xufei Zheng, Jinyu Tian, Jiantao Zhou

To alleviate this problem, we explore how to detect adversarial examples with disentangled label/semantic features under the autoencoder structure.

Adversarial Attack
