1 code implementation • 24 Apr 2024 • Yi Hu, Hanchi Ren, Chen Hu, Jingjing Deng, Xianghua Xie
A central challenge in federated learning (FL) is the effective aggregation of local model weights from disparate and potentially unbalanced participating clients.
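To make the aggregation step concrete, the sketch below implements the standard FedAvg rule, which weights each client's parameters by its local sample count; this is the generic baseline for unbalanced clients, not the aggregation scheme proposed in this paper. The function name `fedavg` and the dict-of-arrays parameter representation are illustrative assumptions.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: combine per-client parameter dicts into a
    global model, weighting each client by its local dataset size."""
    total = sum(client_sizes)
    global_weights = {}
    for name in client_weights[0]:
        # Weighted sum of this parameter across all clients.
        global_weights[name] = sum(
            (n / total) * w[name]
            for w, n in zip(client_weights, client_sizes)
        )
    return global_weights

# Example: two clients with unbalanced data (90 vs. 10 samples).
clients = [{"w": np.array([1.0, 2.0])}, {"w": np.array([3.0, 4.0])}]
print(fedavg(clients, [90, 10]))  # -> {'w': array([1.2, 2.2])}
```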
1 code implementation • 27 Nov 2023 • Xianghua Xie, Chen Hu, Hanchi Ren, Jingjing Deng
In this survey paper, we show that the training data, the exchanged gradients, and the learned model can each be manipulated at different stages to mount malicious attacks, ranging from undermining model performance and reconstructing private local data to inserting backdoors.
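As a concrete instance of the first attack surface, the sketch below shows classic label flipping, one of the simplest data-poisoning attacks a malicious client can use to undermine model performance; it is a textbook example rather than a method taken from the survey, and the names `poison_labels`, `src`, and `dst` are illustrative.

```python
import numpy as np

def poison_labels(labels, src, dst, rate, rng=None):
    """Label-flipping poisoning: relabel a fraction `rate` of the
    samples of class `src` as class `dst` before local training."""
    rng = rng or np.random.default_rng(0)
    labels = labels.copy()
    victims = np.flatnonzero(labels == src)
    flipped = rng.choice(victims, size=int(rate * len(victims)), replace=False)
    labels[flipped] = dst
    return labels

y = np.array([0, 0, 0, 0, 1, 1])
print(poison_labels(y, src=0, dst=1, rate=0.5))  # two of the four 0s become 1s
```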
1 code implementation • 6 May 2023 • Hanchi Ren, Jingjing Deng, Xianghua Xie, Xiaoke Ma, Jianfeng Ma
Our proposed learning method is resistant to gradient leakage attacks; the key-lock module is designed and trained so that, without its private information: a) reconstructing private training data from the shared gradients is infeasible; and b) the inference performance of the global model is significantly degraded.
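The abstract does not specify the internals of the key-lock module, so the sketch below is only a hypothetical illustration of a key-conditioned layer in that spirit: a private key vector is mapped by a small "lock" network to per-channel scale and shift parameters, so activations (and hence gradients) are meaningless without the key. This is an assumed mechanism for illustration, not the paper's actual design.

```python
import torch
import torch.nn as nn

class KeyLock(nn.Module):
    """Hypothetical key-conditioned layer: a private key vector is mapped
    by small 'lock' layers to per-channel scale and shift parameters.
    Without the correct key, the activation statistics are scrambled."""
    def __init__(self, key_dim, channels):
        super().__init__()
        self.scale = nn.Linear(key_dim, channels)  # lock producing gamma
        self.shift = nn.Linear(key_dim, channels)  # lock producing beta

    def forward(self, x, key):
        gamma = self.scale(key).view(1, -1, 1, 1)
        beta = self.shift(key).view(1, -1, 1, 1)
        return gamma * x + beta

lock = KeyLock(key_dim=16, channels=8)
x = torch.randn(2, 8, 32, 32)   # feature map from a conv layer
private_key = torch.randn(16)   # kept secret by the client
print(lock(x, private_key).shape)  # torch.Size([2, 8, 32, 32])
```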
1 code implementation • 2 May 2021 • Hanchi Ren, Jingjing Deng, Xianghua Xie
In this paper, we show that, in an FL system, private image data can be fully recovered from the shared gradients alone via our proposed Generative Regression Neural Network (GRNN).
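GRNN itself trains a generative network to regress the private data from gradients; the sketch below shows the simpler, closely related gradient-matching attack (in the style of Deep Leakage from Gradients) that underlies such recovery, with a toy linear model standing in for a real network. All names and dimensions are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Victim model and the gradient it shares during federated training.
model = torch.nn.Linear(32, 10)
x_true = torch.randn(1, 32)
y_true = torch.tensor([3])
loss = F.cross_entropy(model(x_true), y_true)
true_grads = torch.autograd.grad(loss, model.parameters())

# Attacker: optimise dummy data so its gradient matches the shared one.
x_fake = torch.randn(1, 32, requires_grad=True)
y_fake = torch.randn(1, 10, requires_grad=True)  # soft-label logits
opt = torch.optim.Adam([x_fake, y_fake], lr=0.1)
for _ in range(500):
    opt.zero_grad()
    fake_loss = F.cross_entropy(model(x_fake), F.softmax(y_fake, dim=-1))
    fake_grads = torch.autograd.grad(fake_loss, model.parameters(),
                                     create_graph=True)
    # Gradient-matching objective: squared distance between gradients.
    match = sum(((fg - tg) ** 2).sum()
                for fg, tg in zip(fake_grads, true_grads))
    match.backward()
    opt.step()

print((x_fake - x_true).abs().mean())  # small -> private data recovered
```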
2 code implementations • 14 Jul 2020 • Hanchi Ren, Jingjing Deng, Xianghua Xie, Xiaoke Ma, Yichuan Wang
Typical machine learning approaches require centralized data for model training, which may not be possible where restrictions on data sharing are in place, for instance for privacy or gradient protection.