Search Results for author: Shengshan Hu

Found 9 papers, 2 papers with code

Towards Privacy-Preserving Neural Architecture Search

no code implementations22 Apr 2022 Fuyi Wang, Leo Yu Zhang, Lei Pan, Shengshan Hu, Robin Doss

Machine learning promotes continuous development in signal processing across various fields, including network traffic monitoring, EEG classification, face identification, and many more.

EEG, Face Identification, +1

Towards Efficient Data-Centric Robust Machine Learning with Noise-based Augmentation

no code implementations8 Mar 2022 Xiaogeng Liu, Haoyu Wang, Yechao Zhang, Fangzhou Wu, Shengshan Hu

Data-centric machine learning aims to find effective ways to build appropriate datasets that can improve the performance of AI models.

Data Augmentation
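
As a rough illustration of noise-based augmentation in general (a hypothetical sketch, not the method proposed in this paper), the following Python snippet enlarges a dataset by adding Gaussian-noise copies of each image; the function name and the sigma/copies parameters are made up for this example.

import numpy as np

def gaussian_noise_augment(images, sigma=0.05, copies=1, seed=0):
    # images: array of shape (N, H, W, C) with values in [0, 1].
    # sigma and copies are illustrative choices, not values from the paper.
    rng = np.random.default_rng(seed)
    augmented = [images]
    for _ in range(copies):
        noisy = images + rng.normal(0.0, sigma, size=images.shape)
        augmented.append(np.clip(noisy, 0.0, 1.0))
    return np.concatenate(augmented, axis=0)

# Example: double a toy batch of 8 random "images".
batch = np.random.default_rng(1).random((8, 32, 32, 3))
print(gaussian_noise_augment(batch).shape)  # (16, 32, 32, 3)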

Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer

1 code implementation7 Mar 2022 Shengshan Hu, Xiaogeng Liu, Yechao Zhang, Minghui Li, Leo Yu Zhang, Hai Jin, Libing Wu

While deep face recognition (FR) systems have shown amazing performance in identification and verification, they also raise privacy concerns over their excessive surveillance of users, especially for public face images widely spread on social networks.

Face Recognition

Challenges and approaches for mitigating byzantine attacks in federated learning

no code implementations29 Dec 2021 Shengshan Hu, Jianrong Lu, Wei Wan, Leo Yu Zhang

We then propose a new Byzantine attack method, called weight attack, to defeat those defense schemes, and conduct experiments to demonstrate its threat.

Federated Learning
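
For context, the defense schemes referred to above are typically Byzantine-robust aggregation rules. The sketch below shows one well-known example, coordinate-wise median, purely as a generic illustration; it is neither the defenses surveyed nor the weight attack proposed in this paper.

import numpy as np

def coordinate_wise_median(client_updates):
    # client_updates: array of shape (num_clients, num_params).
    # The per-coordinate median is a standard robust alternative to the mean.
    return np.median(client_updates, axis=0)

# Toy round: 4 honest clients plus 1 client sending an extreme update.
honest = np.random.default_rng(0).normal(0.0, 0.1, size=(4, 5))
byzantine = np.full((1, 5), 100.0)
updates = np.vstack([honest, byzantine])

print(np.mean(updates, axis=0))         # plain averaging is dragged toward 100
print(coordinate_wise_median(updates))  # the median stays near the honest updates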

Self-Supervised Adversarial Example Detection by Disentangled Representation

no code implementations NeurIPS 2021 Zhaoxi Zhang, Leo Yu Zhang, Xufei Zheng, Shengshan Hu, Jinyu Tian, Jiantao Zhou

We compare our method with state-of-the-art self-supervised detection methods under different adversarial attacks and different victim models (30 attack settings), and it exhibits better performance across various metrics (AUC, FPR, TPR) for most attack settings.

Adversarial Attack
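
The metrics mentioned above (AUC, FPR, TPR) can be computed from a detector's scores as in the generic sketch below; the scores, labels, and threshold are invented for illustration and are not results from the paper.

import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical detector scores: label 1 = adversarial example, 0 = benign input.
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
scores = np.array([0.1, 0.3, 0.2, 0.4, 0.8, 0.7, 0.6, 0.35])

auc = roc_auc_score(labels, scores)

# TPR/FPR at one illustrative operating point (threshold = 0.5).
pred = (scores >= 0.5).astype(int)
tpr = np.sum((pred == 1) & (labels == 1)) / np.sum(labels == 1)
fpr = np.sum((pred == 1) & (labels == 0)) / np.sum(labels == 0)
print(f"AUC={auc:.3f}, TPR={tpr:.2f}, FPR={fpr:.2f}")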

Optimizing Privacy-Preserving Outsourced Convolutional Neural Network Predictions

no code implementations22 Feb 2020 Minghui Li, Sherman S. M. Chow, Shengshan Hu, Yuejing Yan, Chao Shen, Qian Wang

This paper proposes a new scheme for privacy-preserving neural network prediction in the outsourced setting, i.e., the server cannot learn the query, (intermediate) results, and the model.
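
One common building block in this outsourced setting is additive secret sharing, where the query is split into random shares so that no single server sees it in the clear, while linear layers can still be evaluated share-wise. The Python sketch below shows only this generic idea under an assumed two-server model; it is not the scheme proposed in the paper.

import numpy as np

MOD = 2**31 - 1  # illustrative modulus for integer shares

def share(x, rng):
    # Split an integer vector x into two additive shares modulo MOD.
    r = rng.integers(0, MOD, size=x.shape, dtype=np.int64)
    return r, (x - r) % MOD

def reconstruct(s0, s1):
    return (s0 + s1) % MOD

rng = np.random.default_rng(0)
query = np.array([5, 17, 42], dtype=np.int64)          # toy "query" vector
s0, s1 = share(query, rng)                             # each server receives one share

W = rng.integers(0, 10, size=(2, 3), dtype=np.int64)   # toy linear layer
y = reconstruct((W @ s0) % MOD, (W @ s1) % MOD)        # servers compute on shares
print(y, (W @ query) % MOD)                            # reconstructed result matches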

Shielding Collaborative Learning: Mitigating Poisoning Attacks through Client-Side Detection

no code implementations29 Oct 2019 Lingchen Zhao, Shengshan Hu, Qian Wang, Jianlin Jiang, Chao Shen, Xiangyang Luo, Pengfei Hu

Collaborative learning allows multiple clients to train a joint model without sharing their data with each other.
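
As background on this collaborative setting (the general setup only, not the client-side detection method proposed in the paper), the sketch below runs a minimal federated-averaging loop in which clients train locally on private data and share only model weights; the linear-regression task, learning rate, and round count are hypothetical.

import numpy as np

def local_update(global_weights, X, y, lr=0.1, epochs=5):
    # One client's local training: a few gradient steps of linear regression
    # on its private data (X, y). Only the resulting weights leave the client.
    w = global_weights.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

def fed_avg(client_weights, client_sizes):
    # Server-side aggregation: average client models weighted by data size.
    return np.average(client_weights, axis=0, weights=np.asarray(client_sizes, float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=20)))

global_w = np.zeros(2)
for _ in range(10):
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = fed_avg(local_ws, [len(y) for _, y in clients])
print(global_w)  # approaches [2, -1] although no client shared its raw data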
