Search Results for author: Shengshan Hu

Found 23 papers, 11 papers with code

Shielding Collaborative Learning: Mitigating Poisoning Attacks through Client-Side Detection

no code implementations29 Oct 2019 Lingchen Zhao, Shengshan Hu, Qian Wang, Jianlin Jiang, Chao Shen, Xiangyang Luo, Pengfei Hu

Collaborative learning allows multiple clients to train a joint model without sharing their data with each other.
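
To make the setting concrete, here is a minimal sketch of one collaborative-learning (federated-averaging) round; the linear model, client data, and hyperparameters are illustrative assumptions, and it shows the training setup the paper defends, not its client-side detection scheme:

```python
import numpy as np

def local_update(global_w, data, lr=0.1):
    """One local least-squares gradient step on a client's private data."""
    X, y = data
    grad = X.T @ (X @ global_w - y) / len(y)   # gradient of the MSE loss
    return global_w - lr * grad

def fedavg_round(global_w, client_data):
    """Server-side aggregation: average client updates, never touch raw data."""
    return np.mean([local_update(global_w, d) for d in client_data], axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(32, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=32)))

w = np.zeros(2)
for _ in range(100):
    w = fedavg_round(w, clients)
print(w)   # converges toward true_w using only shared updates
```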

Optimizing Privacy-Preserving Outsourced Convolutional Neural Network Predictions

no code implementations22 Feb 2020 Minghui Li, Sherman S. M. Chow, Shengshan Hu, Yuejing Yan, Chao Shen, Qian Wang

This paper proposes a new scheme for privacy-preserving neural network prediction in the outsourced setting, i.e., the server learns neither the query, the (intermediate) results, nor the model.

Privacy Preserving
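
For intuition, one common building block in this line of work is additive secret sharing between two non-colluding servers. The toy sketch below uses a public linear layer, which is an assumption (the paper's scheme also hides the model); it only illustrates why shares of a query can be processed without revealing it:

```python
import numpy as np

rng = np.random.default_rng(1)

def share(x):
    """Split x into two additive shares: x = s0 + s1; each share alone
    looks like random noise."""
    s0 = rng.normal(size=x.shape)
    return s0, x - s0

# The client secret-shares its query between two non-colluding servers.
query = rng.normal(size=4)
q0, q1 = share(query)

# Each server applies the linear layer to its share independently.
W = rng.normal(size=(3, 4))
y0, y1 = W @ q0, W @ q1

# The client adds the result shares; neither server ever saw the query.
assert np.allclose(y0 + y1, W @ query)
```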

Challenges and Approaches for Mitigating Byzantine Attacks in Federated Learning

no code implementations29 Dec 2021 Junyu Shi, Wei Wan, Shengshan Hu, Jianrong Lu, Leo Yu Zhang

Then we propose a new Byzantine attack method, called the weight attack, to defeat these defense schemes, and conduct experiments to demonstrate its threat.

Federated Learning
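
The weight attack itself is not detailed in this snippet; as a generic illustration of the threat model, the sketch below shows how a single Byzantine client can dominate naive averaging by submitting a scaled, inverted update (all values hypothetical):

```python
import numpy as np

def aggregate(updates):
    """Naive FedAvg aggregation: a plain mean over client updates."""
    return np.mean(updates, axis=0)

honest = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.1, 0.9])]

# A single Byzantine client submits a scaled, inverted update.
byzantine = -10.0 * np.mean(honest, axis=0)

print(aggregate(honest))                # ~[1.0, 1.0], the honest consensus
print(aggregate(honest + [byzantine]))  # dragged far away by one attacker
```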

Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer

1 code implementation CVPR 2022 Shengshan Hu, Xiaogeng Liu, Yechao Zhang, Minghui Li, Leo Yu Zhang, Hai Jin, Libing Wu

While deep face recognition (FR) systems have shown impressive performance in identification and verification, they also raise privacy concerns due to their excessive surveillance of users, especially for public face images widely spread on social networks.

Face Recognition

Towards Efficient Data-Centric Robust Machine Learning with Noise-based Augmentation

no code implementations8 Mar 2022 Xiaogeng Liu, Haoyu Wang, Yechao Zhang, Fangzhou Wu, Shengshan Hu

Data-centric machine learning aims to find effective ways to build appropriate datasets that can improve the performance of AI models.

BIG-bench Machine Learning · Data Augmentation
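
As a minimal sketch of noise-based augmentation in this spirit (the noise type and strength here are assumptions, not the paper's recipe):

```python
import numpy as np

def noise_augment(batch, sigma=0.05, rng=None):
    """Return a Gaussian-noised copy of an image batch, clipped to [0, 1]."""
    rng = rng or np.random.default_rng()
    return np.clip(batch + rng.normal(scale=sigma, size=batch.shape), 0.0, 1.0)

images = np.random.default_rng(0).uniform(size=(8, 32, 32, 3))
# Train on clean and noisy views together to encourage robustness.
augmented = np.concatenate([images, noise_augment(images)], axis=0)
```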

Towards Privacy-Preserving Neural Architecture Search

no code implementations22 Apr 2022 Fuyi Wang, Leo Yu Zhang, Lei Pan, Shengshan Hu, Robin Doss

Machine learning drives continuous advances in signal processing across various fields, including network traffic monitoring, EEG classification, face identification, and many more.

BIG-bench Machine Learning · EEG +3

BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label

1 code implementation1 Jul 2022 Shengshan Hu, Ziqi Zhou, Yechao Zhang, Leo Yu Zhang, Yifeng Zheng, Yuanyuan HE, Hai Jin

In this paper, we propose BadHash, the first generative-based imperceptible backdoor attack against deep hashing, which can effectively generate invisible and input-specific poisoned images with clean labels.

Backdoor Attack · Contrastive Learning +4

PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models Against Adversarial Examples

no code implementations22 Nov 2022 Shengshan Hu, Junwei Zhang, Wei Liu, Junhui Hou, Minghui Li, Leo Yu Zhang, Hai Jin, Lichao Sun

In addition, existing attack approaches against point cloud classifiers cannot be applied to completion models due to their different output forms and attack purposes.

Adversarial Attack · Point Cloud Classification +2

Masked Language Model Based Textual Adversarial Example Detection

1 code implementation18 Apr 2023 Xiaomei Zhang, Zhaoxi Zhang, Qi Zhong, Xufei Zheng, Yanjun Zhang, Shengshan Hu, Leo Yu Zhang

To explore how to use the masked language model in adversarial detection, we propose a novel textual adversarial example detection method, namely Masked Language Model-based Detection (MLMD). MLMD produces clearly distinguishable signals between normal and adversarial examples by exploring the changes in manifolds induced by the masked language model.

Adversarial Defense · Language Modelling +1
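
A rough sketch of the masking-and-reconstruction idea follows; `fill_mask` and `classify` are toy stand-ins for a masked language model and the victim classifier, not the paper's interfaces:

```python
def mask_variants(tokens, mask_token="[MASK]"):
    """Generate one variant per position, masking a single token each time."""
    return [tokens[:i] + [mask_token] + tokens[i + 1:] for i in range(len(tokens))]

def detection_score(tokens, fill_mask, classify):
    """Higher score = the victim's label flips more often under MLM
    reconstruction, the signal MLMD-style detection thresholds on."""
    original = classify(tokens)
    flips = 0
    for variant in mask_variants(tokens):
        if classify(fill_mask(variant)) != original:
            flips += 1
    return flips / len(tokens)

# Toy stand-ins so the sketch runs end to end.
fill = lambda toks: [("good" if t == "[MASK]" else t) for t in toks]
clf = lambda toks: int(sum(t == "good" for t in toks) > len(toks) / 2)
print(detection_score("a good good movie".split(), fill, clf))  # 0.5
```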

Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning

no code implementations21 Apr 2023 Hangtao Zhang, Zeming Yao, Leo Yu Zhang, Shengshan Hu, Chao Chen, Alan Liew, Zhetao Li

Federated learning (FL) is vulnerable to poisoning attacks, where adversaries corrupt the global aggregation results and cause denial-of-service (DoS).

Federated Learning · Model Poisoning

Why Does Little Robustness Help? Understanding and Improving Adversarial Transferability from Surrogate Training

1 code implementation15 Jul 2023 Yechao Zhang, Shengshan Hu, Leo Yu Zhang, Junyu Shi, Minghui Li, Xiaogeng Liu, Wei Wan, Hai Jin

Building on these insights, we explore the impacts of data augmentation and gradient regularization on transferability, and identify that this trade-off generally exists across various training mechanisms, thus building a comprehensive blueprint for the regulation mechanism behind transferability.

Attribute · Data Augmentation

Downstream-agnostic Adversarial Examples

1 code implementation ICCV 2023 Ziqi Zhou, Shengshan Hu, Ruizhi Zhao, Qian Wang, Leo Yu Zhang, Junhui Hou, Hai Jin

AdvEncoder aims to construct a universal adversarial perturbation or patch for a set of natural images that can fool all the downstream tasks inheriting the victim pre-trained encoder.

Self-Supervised Learning
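
The sketch below illustrates the universal-perturbation idea with a toy frozen encoder: one ℓ∞-bounded perturbation is optimized to displace every image's feature. The encoder, objective, and budget are assumptions, not AdvEncoder's actual loss:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64)) / 8.0        # toy frozen encoder weights
encode = lambda x: np.tanh(W @ x)

def uap(images, eps=0.5, steps=100, lr=0.1):
    """Gradient-ascent on mean feature displacement over all images."""
    delta = np.zeros(images.shape[1])
    originals = np.array([encode(x) for x in images])
    for _ in range(steps):
        grad = np.zeros_like(delta)
        for x, f0 in zip(images, originals):
            z = W @ (x + delta)
            diff = np.tanh(z) - f0
            grad += W.T @ ((1 - np.tanh(z) ** 2) * diff)  # d||f(x+d)-f(x)||^2/dd
        delta = np.clip(delta + lr * np.sign(grad), -eps, eps)  # l-inf budget
    return delta

images = rng.uniform(size=(10, 64))
delta = uap(images)
drift = np.mean([np.linalg.norm(encode(x + delta) - encode(x)) for x in images])
print(drift)   # every image's feature moves under the single perturbation
```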

Benchmarking and Analyzing Robust Point Cloud Recognition: Bag of Tricks for Defending Adversarial Examples

1 code implementation ICCV 2023 Qiufan Ji, Lin Wang, Cong Shi, Shengshan Hu, Yingying Chen, Lichao Sun

In this paper, we first establish a comprehensive and rigorous benchmark to evaluate point cloud adversarial robustness, which provides a detailed understanding of the effects of defense and attack methods.

Adversarial Robustness · Benchmarking

AdvCLIP: Downstream-agnostic Adversarial Examples in Multimodal Contrastive Learning

1 code implementation14 Aug 2023 Ziqi Zhou, Shengshan Hu, Minghui Li, Hangtao Zhang, Yechao Zhang, Hai Jin

In this work, we propose AdvCLIP, the first attack framework for generating downstream-agnostic adversarial examples based on cross-modal pre-trained encoders.

Contrastive Learning · Generative Adversarial Network +2

Corrupting Convolution-based Unlearnable Datasets with Pixel-based Image Transformations

1 code implementation30 Nov 2023 Xianlong Wang, Shengshan Hu, Minghui Li, Zhifei Yu, Ziqi Zhou, Leo Yu Zhang

Through validation experiments that support our hypothesis, we further design a random matrix to boost both $\Theta_{imi}$ and $\Theta_{imc}$, achieving a notable defense effect.
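
$\Theta_{imi}$ and $\Theta_{imc}$ are quantities defined in the paper; as a hedged sketch of the random-matrix idea, one could mix the pixel rows of a (grayscale, for simplicity) image with a near-identity random matrix:

```python
import numpy as np

def random_matrix_transform(img, strength=0.1, rng=None):
    """Left-multiply a grayscale image by a random near-identity matrix,
    mixing neighbouring pixel rows to disrupt structured unlearnable noise."""
    rng = rng or np.random.default_rng()
    h = img.shape[0]
    M = np.eye(h) + strength * rng.normal(size=(h, h)) / np.sqrt(h)
    return np.clip(M @ img, 0.0, 1.0)

img = np.random.default_rng(0).uniform(size=(32, 32))
defended = random_matrix_transform(img)   # preprocess before training
```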

MISA: Unveiling the Vulnerabilities in Split Federated Learning

no code implementations18 Dec 2023 Wei Wan, Yuxuan Ning, Shengshan Hu, Lulu Xue, Minghui Li, Leo Yu Zhang, Hai Jin

This attack unveils the vulnerabilities in SFL, challenging the conventional belief that SFL is robust against poisoning attacks.

Edge-computing · Federated Learning

Revisiting Gradient Pruning: A Dual Realization for Defending against Gradient Attacks

no code implementations30 Jan 2024 Lulu Xue, Shengshan Hu, Ruizhi Zhao, Leo Yu Zhang, Shengqing Hu, Lichao Sun, Dezhong Yao

To mitigate the weaknesses of existing solutions, we propose a novel defense method, Dual Gradient Pruning (DGP), based on gradient pruning, which can improve communication efficiency while preserving the utility and privacy of collaborative learning (CL).
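
As a generic gradient-pruning sketch (not DGP's exact dual scheme), keeping only the top-k gradient entries both shrinks the message and removes much of the signal that gradient-inversion attacks exploit:

```python
import numpy as np

def prune_gradient(grad, keep_ratio=0.1):
    """Zero out all but the largest-magnitude entries of a gradient vector."""
    k = max(1, int(keep_ratio * grad.size))
    idx = np.argpartition(np.abs(grad), -k)[-k:]   # indices of top-k magnitudes
    pruned = np.zeros_like(grad)
    pruned[idx] = grad[idx]
    return pruned

g = np.random.default_rng(0).normal(size=1000)
sparse_g = prune_gradient(g, keep_ratio=0.05)      # only 50 values transmitted
print(np.count_nonzero(sparse_g))
```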

Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples

1 code implementation16 Mar 2024 Ziqi Zhou, Minghui Li, Wei Liu, Shengshan Hu, Yechao Zhang, Wei Wan, Lulu Xue, Leo Yu Zhang, Dezhong Yao, Hai Jin

In response to these challenges, we propose Genetic Evolution-Nurtured Adversarial Fine-tuning (Gen-AF), a two-stage adversarial fine-tuning approach aimed at enhancing the robustness of downstream models.

Self-Supervised Learning
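
Gen-AF's two-stage genetic procedure is not reproduced here; the sketch below shows plain adversarial fine-tuning (FGSM-based) of a toy logistic model, as a hedged illustration of fine-tuning on perturbed inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def fgsm(x, y, w, eps=0.1):
    """Craft an FGSM adversarial example for a logistic model."""
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    return x + eps * np.sign((p - y) * w)          # sign of d(loss)/dx

def adv_finetune_step(w, X, y, eps=0.1, lr=0.05):
    """Fine-tune on adversarially perturbed inputs instead of clean ones."""
    grad_w = np.zeros_like(w)
    for x, t in zip(X, y):
        x_adv = fgsm(x, t, w, eps)
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w)))
        grad_w += (p - t) * x_adv                  # logistic-loss gradient
    return w - lr * grad_w / len(y)

X = rng.normal(size=(64, 8))
y = (X @ rng.normal(size=8) > 0).astype(float)
w = rng.normal(size=8) * 0.01                      # toy "pre-trained" weights
for _ in range(100):
    w = adv_finetune_step(w, X, y)
print(np.mean(((X @ w) > 0).astype(float) == y))   # clean accuracy after tuning
```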

Detector Collapse: Backdooring Object Detection to Catastrophic Overload or Blindness

no code implementations17 Apr 2024 Hangtao Zhang, Shengshan Hu, Yichen Wang, Leo Yu Zhang, Ziqi Zhou, Xianlong Wang, Yanjun Zhang, Chao Chen

This paper is dedicated to bridging this gap by introducing Detector Collapse (DC), a new backdoor attack paradigm tailored for object detection.

Autonomous Driving · Backdoor Attack +3
