1 code implementation • ICCV 2023 • Bin Chen, Jia-Li Yin, Shukai Chen, Bo-Hao Chen, Ximeng Liu
Alternatively, model ensemble adversarial attacks fuse the outputs of surrogate models with diverse architectures into a single ensemble loss, making the generated adversarial example more likely to transfer to other models because it must fool multiple models simultaneously.
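A minimal PyTorch sketch of the ensemble idea, under simplifications of my own (uniform logit averaging, a single FGSM-style step); the paper's fusion rule and attack loop may differ:

```python
# Minimal sketch of an ensemble adversarial attack (illustrative, not the
# paper's exact fusion rule): average logits from several surrogates and
# take one FGSM-style step on the ensemble loss.
import torch
import torch.nn.functional as F

def ensemble_fgsm(surrogates, x, y, eps=8 / 255):
    """Craft adversarial examples that fool all surrogates at once."""
    x_adv = x.clone().detach().requires_grad_(True)
    # Fuse outputs: a simple uniform average of the surrogate logits.
    logits = torch.stack([m(x_adv) for m in surrogates]).mean(dim=0)
    loss = F.cross_entropy(logits, y)           # ensemble loss
    grad = torch.autograd.grad(loss, x_adv)[0]
    # One signed-gradient step, clipped to the valid image range.
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()
```

Here `surrogates` would be a list of pretrained models in eval mode; an iterative variant would repeat the step with projection onto the eps-ball.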
no code implementations • 1 Jul 2023 • Zekai Chen, Fuyi Wang, Zhiwei Zheng, Ximeng Liu, Yujie Lin
This ensures that Fedward maintains performance in the Non-IID scenario.
no code implementations • 22 Mar 2023 • Bowen Zhao, Wei-neng Chen, Xiaoguo Li, Ximeng Liu, Qingqi Pei, Jun Zhang
To this end, in this paper, we discuss three typical optimization paradigms (i.e., centralized optimization, distributed optimization, and data-driven optimization) to characterize the optimization modes of evolutionary computation, and propose BOOM to sort out privacy concerns in evolutionary computation.
1 code implementation • 12 Dec 2022 • Wanqing Zhu, Jia-Li Yin, Bo-Hao Chen, Ximeng Liu
In this paper, we present a new meta self-training pipeline, named SRoUDA, for improving adversarial robustness of UDA models.
no code implementations • 14 Nov 2022 • Wenyuan Yang, Shuo Shao, Yue Yang, Xiyao Liu, Ximeng Liu, Zhihua Xia, Gerald Schaefer, Hui Fang
In this paper, we propose a novel client-side FL watermarking scheme to tackle the copyright protection issue in secure FL with HE.
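For context, a generic white-box watermark-embedding sketch in the spirit of weight-regularizer schemes; the paper's client-side, HE-based scheme is more involved, and every name below is illustrative:

```python
# Generic weight-regularizer watermarking sketch (NOT the paper's HE-based
# scheme): embed a bit string b into a weight vector w through a fixed
# secret random projection X, via a binary-cross-entropy regularizer.
import torch
import torch.nn.functional as F

def watermark_loss(w, X, b):
    """Penalty pushing sigmoid(X @ w) toward the watermark bits b."""
    return F.binary_cross_entropy_with_logits(X @ w.flatten(), b)

def extract_bits(w, X):
    """Read the watermark back: threshold the projection at zero."""
    return (X @ w.flatten() > 0).float()

torch.manual_seed(0)
w = torch.randn(64, requires_grad=True)   # stand-in for a layer's weights
X = torch.randn(32, 64)                   # secret projection matrix
b = torch.randint(0, 2, (32,)).float()    # 32-bit watermark

opt = torch.optim.SGD([w], lr=0.5)
for _ in range(200):                      # task loss omitted for brevity
    opt.zero_grad()
    watermark_loss(w, X, b).backward()
    opt.step()
print((extract_bits(w, X) == b).float().mean())  # bit accuracy, ~1.0
```

In a real scheme the watermark loss would be added to the training objective, and here the FL and HE machinery is assumed away entirely.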
1 code implementation • IEEE Transactions on Emerging Topics in Computing 2022 • Jianping Cai, Ximeng Liu, Zhiyong Yu, Kun Guo, Jiayin Li
The experiments show that our proposed algorithm takes only about 400 seconds to handle up to 9.6 million large-scale samples, while state-of-the-art algorithms take close to 1000 seconds to handle every 1000 samples, demonstrating the advantage of our algorithms in handling large-scale samples. Based on δ-data indistinguishability theory, we provide quantitative theoretical guarantees for the security of our algorithms.
1 code implementation • 13 Aug 2022 • Mingyuan Fan, Cen Chen, Ximeng Liu, Wenzhong Guo
By contrast, we re-formulate crafting transferable AEs as a maximum a posteriori (MAP) probability estimation problem, which is an effective approach to boosting the generalization of results with limited available data.
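One way to read the MAP view, sketched below under assumptions of my own (cross-entropy on a surrogate as the likelihood term, a Gaussian prior over the perturbation); this is not the paper's estimator:

```python
# Illustrative MAP-style objective for crafting a transferable AE (my own
# simplification, not the paper's estimator): ascend on a likelihood term
# that rewards fooling the surrogate plus a log-prior over the perturbation.
import torch
import torch.nn.functional as F

def map_attack_step(model, x, y, delta, lr=0.01, prior_weight=0.1, eps=8 / 255):
    delta = delta.clone().detach().requires_grad_(True)
    attack_term = F.cross_entropy(model(x + delta), y)  # want this LARGE
    log_prior = -prior_weight * delta.pow(2).sum()      # Gaussian prior on delta
    obj = attack_term + log_prior                       # MAP-style objective
    grad = torch.autograd.grad(obj, delta)[0]
    return (delta + lr * grad.sign()).clamp(-eps, eps).detach()
```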
no code implementations • 13 Aug 2022 • Mingyuan Fan, Yang Liu, Cen Chen, Ximeng Liu, Wenzhong Guo
The opacity of neural networks makes them vulnerable to backdoor attacks, in which the hidden attention of infected neurons is triggered to override normal predictions with attacker-chosen ones.
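For background, a textbook BadNets-style poisoning sketch showing how such hidden behavior is typically installed; it is not this paper's method:

```python
# Textbook BadNets-style poisoning sketch (context for the attack surface,
# not this paper's method): stamp a small trigger patch on a fraction of
# training images and relabel them with the attacker-chosen target class.
import torch

def poison(images, labels, target_class=0, rate=0.05):
    """images: (N, C, H, W) in [0, 1]; returns a poisoned copy."""
    images, labels = images.clone(), labels.clone()
    n_poison = int(rate * len(images))
    idx = torch.randperm(len(images))[:n_poison]
    images[idx, :, -3:, -3:] = 1.0      # 3x3 white square, bottom-right
    labels[idx] = target_class          # attacker-chosen label
    return images, labels
```

A model trained on the poisoned set behaves normally on clean inputs but predicts `target_class` whenever the patch is present.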
no code implementations • 27 May 2022 • Bowen Zhao, Wei-neng Chen, Feng-Feng Wei, Ximeng Liu, Qingqi Pei, Jun Zhang
Specifically, PEGA enables users to outsource COPs to a cloud server that holds a competitive GA and approximates the optimal solution in a privacy-preserving manner.
no code implementations • 28 Feb 2022 • Mingyuan Fan, Wenzhong Guo, Shengxing Yu, Zuobin Ying, Ximeng Liu
Transferability of adversarial examples is of critical importance for launching black-box adversarial attacks, where attackers are only allowed to access the output of the target model.
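The black-box setting can be made concrete with a short sketch, assuming hypothetical surrogate and target models: craft on the white-box surrogate, then query only the target's outputs to measure transfer:

```python
# Transferability check sketch (hypothetical surrogate/target models):
# craft adversarial examples on the white-box surrogate, then query only
# the black-box target's outputs to see how many of them transfer.
import torch
import torch.nn.functional as F

def transfer_rate(surrogate, target, x, y, eps=8 / 255):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(surrogate(x_adv), y)      # white-box access
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x_adv + eps * grad.sign()).clamp(0, 1).detach()
    with torch.no_grad():                            # black-box access only
        fooled = target(x_adv).argmax(dim=1) != y
    return fooled.float().mean().item()
```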
no code implementations • 24 Jan 2022 • Yang Liu, Mingyuan Fan, Cen Chen, Ximeng Liu, Zhuo Ma, Li Wang, Jianfeng Ma
First, trigger pattern recovery is conducted to extract the trigger patterns that infect the victim model.
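A Neural-Cleanse-style sketch of generic trigger recovery, simplified under assumptions of my own and not necessarily this paper's procedure:

```python
# Neural-Cleanse-style trigger recovery sketch (a generic simplification,
# not necessarily this paper's procedure): optimize a mask m and pattern p
# so that (1 - m) * x + m * p is classified as a suspected target label,
# while an L1 penalty keeps the recovered mask small.
import torch
import torch.nn.functional as F

def recover_trigger(model, loader, target, shape, steps=100, lam=0.01):
    m = torch.zeros(shape, requires_grad=True)   # mask logits
    p = torch.zeros(shape, requires_grad=True)   # pattern logits
    opt = torch.optim.Adam([m, p], lr=0.1)
    for _ in range(steps):
        for x, _ in loader:
            mask, pattern = torch.sigmoid(m), torch.sigmoid(p)
            x_trig = (1 - mask) * x + mask * pattern
            y = torch.full((len(x),), target, dtype=torch.long)
            loss = F.cross_entropy(model(x_trig), y) + lam * mask.abs().sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return torch.sigmoid(m).detach(), torch.sigmoid(p).detach()
```

An abnormally small recovered mask for one class is the usual evidence that the class is backdoored.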
no code implementations • 1 Dec 2021 • Jia-Li Yin, Lehui Xie, Wanqing Zhu, Ximeng Liu, Bo-Hao Chen
However, most existing adversarial training methods focus on improving robust accuracy by strengthening the adversarial examples but neglect the increasing shift between natural data and adversarial examples, leading to a dramatic decrease in natural accuracy.
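For reference, the standard PGD adversarial training loop under common defaults; training only on adversarial examples is precisely what widens the natural-accuracy gap described above:

```python
# Standard PGD adversarial training sketch (common defaults, not the
# paper's method): the model only ever sees adversarial examples, which
# improves robust accuracy at the cost of natural accuracy.
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)   # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)          # project to eps-ball
    return x_adv.clamp(0, 1).detach()

def adv_train_step(model, opt, x, y):
    x_adv = pgd(model, x, y)                      # inner maximization
    opt.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()   # outer minimization
    opt.step()
```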
no code implementations • 20 Feb 2021 • Bowen Zhao, Ximeng Liu, Wei-neng Chen
Specifically, in order to protect privacy, participants locally process sensing data via federated learning and only upload encrypted training models.
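A bare-bones federated-averaging sketch of this flow; the upload-side encryption is a placeholder, and the paper's actual crowdsensing protocol and cryptosystem are assumed away:

```python
# Bare-bones federated-averaging sketch: clients train locally and upload
# models; the encrypt() step is an identity PLACEHOLDER, not real crypto,
# and the paper's actual cryptosystem is assumed away here.
import copy
import torch

def encrypt(state):
    return state          # placeholder for the paper's encryption step

def local_update(model, loader, epochs=1, lr=0.01):
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            torch.nn.functional.cross_entropy(model(x), y).backward()
            opt.step()
    return encrypt(model.state_dict())   # only the (encrypted) model leaves

def fed_avg(states):
    """Server-side aggregation (float parameters assumed)."""
    return {k: torch.stack([s[k] for s in states]).mean(dim=0)
            for k in states[0]}
```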
no code implementations • 5 Feb 2021 • Lehui Xie, Yaopeng Wang, Jia-Li Yin, Ximeng Liu
Previous methods try to reduce the computational burden of adversarial training by using single-step adversarial example generation schemes, which effectively improve efficiency but introduce the problem of catastrophic overfitting, where the robust accuracy against the Fast Gradient Sign Method (FGSM) can reach nearly 100% while the robust accuracy against Projected Gradient Descent (PGD) suddenly drops to 0% over a single epoch.
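The two attacks named here, side by side in a minimal sketch; catastrophic overfitting is diagnosed by the gap between the two robust accuracies:

```python
# FGSM (single-step) vs PGD (multi-step) in one sketch: catastrophic
# overfitting shows up as near-100% FGSM robust accuracy alongside
# near-0% PGD robust accuracy on the same model.
import torch
import torch.nn.functional as F

def attack(model, x, y, eps=8 / 255, alpha=None, steps=1):
    """steps=1 with alpha=eps is FGSM; steps=10 with small alpha is PGD."""
    alpha = eps if alpha is None else alpha
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # eps-ball
    return x_adv.clamp(0, 1).detach()

def robust_acc(model, x, y, **kw):
    x_adv = attack(model, x, y, **kw)
    with torch.no_grad():
        preds = model(x_adv).argmax(dim=1)
    return (preds == y).float().mean().item()

# robust_acc(model, x, y)                        -> FGSM robust accuracy
# robust_acc(model, x, y, alpha=2/255, steps=10) -> PGD robust accuracy
```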
no code implementations • 23 Sep 2020 • Zhuoran Ma, Jianfeng Ma, Yinbin Miao, Ximeng Liu, Kim-Kwang Raymond Choo, Robert H. Deng
Previous works on federated learning have been inadequate in ensuring the privacy of DIs and the availability of the final federated model.
no code implementations • 9 May 2020 • Zhuzhu Wang, Yilong Yang, Yang Liu, Ximeng Liu, Brij B. Gupta, Jianfeng Ma
In this paper, we propose a secret-sharing-based federated learning architecture, FedXGB, to achieve privacy-preserving extreme gradient boosting for mobile crowdsensing.
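Additive secret sharing in its simplest integer form, the primitive such architectures build on (not FedXGB's full protocol):

```python
# Additive secret sharing in its simplest form (the primitive behind such
# architectures, not FedXGB's full protocol): split a secret into random
# shares that sum to it modulo a large prime; no single share reveals it.
import random

P = 2**61 - 1  # a large prime modulus

def share(secret, n_parties):
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Shares add locally, so a server can aggregate values without ever
# seeing any individual participant's input:
a, b = share(42, 3), share(100, 3)
agg = [(x + y) % P for x, y in zip(a, b)]
assert reconstruct(agg) == 142
```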
no code implementations • 24 Mar 2020 • Yang Liu, Zhuo Ma, Ximeng Liu, Jian Liu, Zhongyuan Jiang, Jianfeng Ma, Philip Yu, Kui Ren
To this end, machine unlearning has become a popular research topic, which allows users to eliminate memorization of their private data from a trained machine learning model. In this paper, we propose the first uniform metric, called forgetting rate, to measure the effectiveness of a machine unlearning method.
no code implementations • 8 Nov 2019 • Yang Liu, Zhuo Ma, Ximeng Liu, Zhuzhu Wang, Siqi Ma, Ken Ren
A learning federation is composed of multiple participants who use the federated learning technique to collaboratively train a machine learning model without directly revealing the local data.