1 code implementation • 12 Jul 2023 • Jun Niu, Xiaoyan Zhu, Moxuan Zeng, Ge Zhang, Qingyang Zhao, Chunhui Huang, Yangming Zhang, Suyu An, Yangzhong Wang, Xinghui Yue, Zhipeng He, Weihao Guo, Kuo Shen, Peng Liu, Yulong Shen, Xiaohong Jiang, Jianfeng Ma, Yuqing Zhang
We identify three principles for the proposed methodology of comparing different MI attacks, and we design and implement the MIBench benchmark, which provides 84 evaluation scenarios for each dataset.
1 code implementation • 6 May 2023 • Hanchi Ren, Jingjing Deng, Xianghua Xie, Xiaoke Ma, Jianfeng Ma
Our proposed learning method is resistant to gradient leakage attacks, and the key-lock module is designed and trained to ensure that, without the private information of the key-lock module: a) reconstructing private training data from the shared gradient is infeasible; and b) the global model's inference performance is significantly compromised.
no code implementations • 11 Feb 2022 • Ruikang Yang, Jianfeng Ma, Yinbin Miao, Xindi Ma
Membership inference attacks can, to a certain degree, measure how much a model leaks about its source data.
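As a minimal illustration of the idea (not the paper's own attack), a common baseline membership inference attack simply thresholds the model's confidence on a sample: training members tend to receive higher confidence than unseen data. The `threshold` value and the toy confidence arrays below are assumptions for illustration.

```python
import numpy as np

def threshold_mia(confidences, threshold=0.9):
    """Guess membership: samples on which the model is highly
    confident are predicted to be training-set members."""
    return confidences >= threshold

# Toy confidences: members typically score higher than non-members.
member_conf = np.array([0.95, 0.99, 0.97])
nonmember_conf = np.array([0.60, 0.85, 0.70])

member_guess = threshold_mia(member_conf)        # all True
nonmember_guess = threshold_mia(nonmember_conf)  # all False
```

The gap between the two guess rates is one crude measure of how much the model has memorized its training data.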
no code implementations • 24 Jan 2022 • Yang Liu, Mingyuan Fan, Cen Chen, Ximeng Liu, Zhuo Ma, Li Wang, Jianfeng Ma
First, trigger pattern recovery is conducted to extract the trigger patterns infected by the victim model.
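Trigger reverse-engineering methods typically search over a candidate (mask, pattern) pair that, when stamped onto clean inputs, flips the victim model's prediction. The blending function below is the standard formulation used in that line of work (e.g. Neural Cleanse); it is a generic sketch, not this paper's specific recovery procedure.

```python
import numpy as np

def apply_trigger(x, mask, pattern):
    """Stamp a candidate trigger onto an input image: pixels where
    mask == 1 are replaced by the trigger pattern, the rest of the
    image is left untouched."""
    return (1 - mask) * x + mask * pattern

# Toy example: a 2x2 "image" with a one-pixel trigger in the corner.
x = np.zeros((2, 2))
mask = np.array([[1.0, 0.0], [0.0, 0.0]])
pattern = np.ones((2, 2))
triggered = apply_trigger(x, mask, pattern)
```

In a full recovery pipeline, `mask` and `pattern` would be optimized by gradient descent to maximize the victim model's target-class output while keeping the mask sparse.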
no code implementations • 23 Sep 2020 • Zhuoran Ma, Jianfeng Ma, Yinbin Miao, Ximeng Liu, Kim-Kwang Raymond Choo, Robert H. Deng
Previous works on federated learning have been inadequate in ensuring the privacy of DIs and the availability of the final federated model.
Cryptography and Security
no code implementations • 9 May 2020 • Zhuzhu Wang, Yilong Yang, Yang Liu, Ximeng Liu, Brij B. Gupta, Jianfeng Ma
In this paper, we propose a secret sharing based federated learning architecture FedXGB to achieve the privacy-preserving extreme gradient boosting for mobile crowdsensing.
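The core primitive behind such architectures is additive secret sharing: each client splits a sensitive value (e.g. a gradient statistic) into random shares that individually reveal nothing, yet sum back to the original. The sketch below illustrates the primitive only, under an assumed field modulus; it is not FedXGB's actual protocol.

```python
import random

PRIME = 2**61 - 1  # field modulus; an assumption for illustration

def share(secret, n=3):
    """Split an integer into n additive shares mod PRIME.
    Any n-1 shares are uniformly random and leak nothing."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod PRIME."""
    return sum(shares) % PRIME
```

Because sharing is linear, servers can aggregate shares from many clients and reconstruct only the sum, never any individual contribution.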
no code implementations • 24 Mar 2020 • Yang Liu, Zhuo Ma, Ximeng Liu, Jian Liu, Zhongyuan Jiang, Jianfeng Ma, Philip Yu, Kui Ren
To this end, machine unlearning has become a popular research topic: it allows users to eliminate the memorization of their private data from a trained machine learning model. In this paper, we propose the first uniform metric, called the forgetting rate, to measure the effectiveness of a machine unlearning method.
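One plausible way to operationalize such a metric (an assumption here, not necessarily the paper's exact definition) is to run a membership inference oracle on the erased samples before and after unlearning, and report the fraction that flip from "member" to "non-member":

```python
def forgetting_rate(before, after):
    """before/after: membership-inference verdicts (True = judged a
    training member) on the erased samples, before and after
    unlearning. Returns the fraction of pre-unlearning members
    that the attacker no longer recognizes afterwards."""
    members_before = [i for i, b in enumerate(before) if b]
    if not members_before:
        return 0.0
    flipped = sum(1 for i in members_before if not after[i])
    return flipped / len(members_before)
```

A rate of 1.0 would mean the unlearning procedure removed every detectable trace of the erased data.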
no code implementations • journal 2019 • Zhuo Ma, Haoran Ge, Yang Liu, Meng Zhao, Jianfeng Ma
In this paper, we present a combination method for Android malware detection based on machine learning algorithms.