no code implementations • 28 Feb 2024 • Xinjian Luo, Yangfan Jiang, Fei Wei, Yuncheng Wu, Xiaokui Xiao, Beng Chin Ooi
We demonstrate that the sharer can execute fairness poisoning attacks to undermine the receiver's downstream models by manipulating the training data distribution of the diffusion model.
no code implementations • 16 Oct 2023 • Xiaochen Zhu, Xinjian Luo, Yuncheng Wu, Yangfan Jiang, Xiaokui Xiao, Beng Chin Ooi
SDAR leverages auxiliary data and adversarial regularization to learn a decodable simulator of the client's private model, which can effectively infer the client's private features under the vanilla SL, and both features and labels under the U-shaped SL.
1 code implementation • 17 May 2021 • Xinjian Luo, Xiaokui Xiao, Yuncheng Wu, Juncheng Liu, Beng Chin Ooi
InstaHide is a state-of-the-art mechanism for protecting private training images: it mixes multiple private images and modifies the result so that its visual features are indistinguishable to the naked eye.
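The mixing step described above can be sketched as follows. This is an illustrative NumPy sketch of InstaHide-style encoding, not the paper's implementation; the function name, the choice of Dirichlet mixing weights, and the parameter `k` are assumptions for illustration.

```python
import numpy as np

def instahide_encode(private_img, public_imgs, k=4, seed=None):
    # Hypothetical sketch (not the authors' code): mix one private image
    # with k-1 public images using random coefficients that sum to 1,
    # then flip the sign of each pixel at random to hide visual features.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(public_imgs), size=k - 1, replace=False)
    stack = np.stack([private_img] + [public_imgs[i] for i in idx])
    coeffs = rng.dirichlet(np.ones(k))           # mixing weights, sum to 1
    mixed = np.tensordot(coeffs, stack, axes=1)  # weighted pixel-wise sum
    signs = rng.choice([-1.0, 1.0], size=mixed.shape)  # random sign mask
    return signs * mixed
```

Because the output is a sign-flipped convex combination, pixel magnitudes stay bounded by the inputs' range, while the random mask destroys human-recognizable structure.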
1 code implementation • 20 Oct 2020 • Xinjian Luo, Yuncheng Wu, Xiaokui Xiao, Beng Chin Ooi
Federated learning (FL) is an emerging paradigm that enables multiple organizations to collaborate on model training without revealing their private data to one another.
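A minimal sketch of the generic FL setup (an assumption about the standard federated averaging scheme, not this paper's protocol): each organization trains locally on its own data, and only model weights are shared; the server averages them, weighted by local dataset size. The function names and the `local_update` callback are hypothetical.

```python
import numpy as np

def fedavg_round(global_weights, client_datasets, local_update):
    # One round of federated averaging (illustrative sketch).
    # Raw data never leaves a client; only weight vectors are exchanged.
    updates, sizes = [], []
    for data in client_datasets:
        updates.append(local_update(global_weights, data))  # local training
        sizes.append(len(data))
    sizes = np.asarray(sizes, dtype=float)
    weights = sizes / sizes.sum()  # weight each client by its data size
    return sum(w * u for w, u in zip(weights, updates))
```

Only the aggregated weights are observable to other parties, which is precisely the surface that inference attacks on FL (including GAN-based ones) target.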
1 code implementation • 27 Apr 2020 • Xianglong Zhang, Xinjian Luo
In this paper, we explore defenses against GAN-based attacks in federated learning and propose a framework, Anti-GAN, to prevent attackers from learning the real distribution of the victim's data.