Search Results for author: Ximeng Liu

Found 18 papers, 4 papers with code

An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability

1 code implementation ICCV 2023 Bin Chen, Jia-Li Yin, Shukai Chen, Bo-Hao Chen, Ximeng Liu

Alternatively, model ensemble adversarial attacks are proposed to fuse outputs from surrogate models with diverse architectures to get an ensemble loss, making the generated adversarial example more likely to transfer to other models as it can fool multiple models concurrently.

Adversarial Attack
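The ensemble idea in the excerpt above — averaging the losses of several surrogate models so one perturbation fools all of them at once — can be sketched roughly as follows. This is a minimal NumPy illustration using linear surrogates and a single FGSM step, not the paper's adaptive weighting scheme; `ensemble_fgsm` and the linear-model setup are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ensemble_fgsm(x, y, models, eps=0.1):
    """One signed-gradient step on the averaged cross-entropy loss
    of several surrogate models (each model is a weight matrix W,
    with logits = W @ x)."""
    grad = np.zeros_like(x)
    for W in models:
        p = softmax(W @ x)
        p[y] -= 1.0          # d(cross-entropy)/d(logits) = p - onehot(y)
        grad += W.T @ p      # chain rule back to the input
    grad /= len(models)      # ensemble loss = mean of member losses
    return x + eps * np.sign(grad)
```

Because the averaged gradient points in a direction that raises every member's loss, the resulting example tends to transfer better than one crafted against a single surrogate.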

When Evolutionary Computation Meets Privacy

no code implementations 22 Mar 2023 Bowen Zhao, Wei-neng Chen, Xiaoguo Li, Ximeng Liu, Qingqi Pei, Jun Zhang

To this end, in this paper, we discuss three typical optimization paradigms (i.e., centralized optimization, distributed optimization, and data-driven optimization) to characterize optimization modes of evolutionary computation and propose BOOM to sort out privacy concerns in evolutionary computation.

Distributed Computing Distributed Optimization

SRoUDA: Meta Self-training for Robust Unsupervised Domain Adaptation

1 code implementation 12 Dec 2022 Wanqing Zhu, Jia-Li Yin, Bo-Hao Chen, Ximeng Liu

In this paper, we present a new meta self-training pipeline, named SRoUDA, for improving adversarial robustness of UDA models.

Adversarial Robustness Unsupervised Domain Adaptation

Watermarking in Secure Federated Learning: A Verification Framework Based on Client-Side Backdooring

no code implementations 14 Nov 2022 Wenyuan Yang, Shuo Shao, Yue Yang, Xiyao Liu, Ximeng Liu, Zhihua Xia, Gerald Schaefer, Hui Fang

In this paper, we propose a novel client-side FL watermarking scheme to tackle the copyright protection issue in secure FL with HE.

Federated Learning

Efficient Vertical Federated Learning Method for Ridge Regression of Large-Scale Samples via Least-Squares Solution

1 code implementation IEEE Transactions on Emerging Topics in Computing 2022 Jianping Cai, Ximeng Liu, Zhiyong Yu, Kun Guo, Jiayin Li

The experiments show that our proposed algorithm takes only about 400 seconds to handle up to 9.6 million large-scale samples, while the state-of-the-art algorithms take close to 1000 seconds to handle every 1000 samples, which embodies the advantage of our algorithms in handling large-scale samples. Based on δ-data indistinguishability theory, we provide quantitative theoretical guarantees for the security of our algorithms.

Data Integration Vertical Federated Learning
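The least-squares route named in the title rests on the closed-form ridge solution w = (XᵀX + λI)⁻¹Xᵀy. A centralized sketch of that formula is below; the paper's actual contribution — computing it securely over vertically partitioned features — is not reproduced here:

```python
import numpy as np

def ridge_closed_form(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y.

    A vertical-FL protocol would distribute this least-squares
    computation across parties holding disjoint feature columns;
    shown here in its plain centralized form.
    """
    d = X.shape[1]
    # Solve the regularized normal equations instead of inverting.
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

With λ → 0 on well-conditioned data this recovers the ordinary least-squares fit exactly.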

MaskBlock: Transferable Adversarial Examples with Bayes Approach

1 code implementation 13 Aug 2022 Mingyuan Fan, Cen Chen, Ximeng Liu, Wenzhong Guo

By contrast, we re-formulate crafting transferable AEs as a maximum a posteriori (MAP) probability estimation problem, which is an effective approach to boosting the generalization of results with limited available data.
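One common way to realize such a Bayesian treatment in practice is to average input gradients over randomly masked copies of the input, which approximates marginalizing over mask variables. The sketch below illustrates that idea only; it is an assumption-laden stand-in, not the authors' exact MaskBlock algorithm, and `grad_fn` is a hypothetical callable returning dLoss/dx:

```python
import numpy as np

def masked_gradient(x, grad_fn, n_masks=8, drop=0.3, rng=None):
    """Average the input gradient over randomly masked copies of x.

    Zeroing random parts of the input before each gradient
    evaluation crudely marginalizes over masks, in the spirit of
    a MAP/Bayesian formulation of transferable-AE crafting.
    """
    if rng is None:
        rng = np.random.default_rng()
    g = np.zeros_like(x)
    for _ in range(n_masks):
        mask = (rng.random(x.shape) >= drop).astype(x.dtype)
        g += grad_fn(x * mask)   # gradient at a masked input
    return g / n_masks
```

The averaged gradient is less tied to any one input pattern, which is what improves generalization (transfer) of the resulting perturbation.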

Defense against Backdoor Attacks via Identifying and Purifying Bad Neurons

no code implementations 13 Aug 2022 Mingyuan Fan, Yang Liu, Cen Chen, Ximeng Liu, Wenzhong Guo

The opacity of neural networks leads to their vulnerability to backdoor attacks, where the hidden attention of infected neurons is triggered to override normal predictions with attacker-chosen ones.

backdoor defense

Evolution as a Service: A Privacy-Preserving Genetic Algorithm for Combinatorial Optimization

no code implementations 27 May 2022 Bowen Zhao, Wei-neng Chen, Feng-Feng Wei, Ximeng Liu, Qingqi Pei, Jun Zhang

Specifically, PEGA enables users to outsource COPs to a cloud server that holds a competitive GA and approximates the optimal solution in a privacy-preserving manner.

Combinatorial Optimization Evolutionary Algorithms

Enhance transferability of adversarial examples with model architecture

no code implementations 28 Feb 2022 Mingyuan Fan, Wenzhong Guo, Shengxing Yu, Zuobin Ying, Ximeng Liu

Transferability of adversarial examples is of critical importance to launch black-box adversarial attacks, where attackers are only allowed to access the output of the target model.

Backdoor Defense with Machine Unlearning

no code implementations 24 Jan 2022 Yang Liu, Mingyuan Fan, Cen Chen, Ximeng Liu, Zhuo Ma, Li Wang, Jianfeng Ma

First, trigger pattern recovery is conducted to extract the trigger patterns that infected the victim model.

backdoor defense Machine Unlearning

Push Stricter to Decide Better: A Class-Conditional Feature Adaptive Framework for Improving Adversarial Robustness

no code implementations 1 Dec 2021 Jia-Li Yin, Lehui Xie, Wanqing Zhu, Ximeng Liu, Bo-Hao Chen

However, most of the existing adversarial training methods focus on improving the robust accuracy by strengthening the adversarial examples but neglecting the increasing shift between natural data and adversarial examples, leading to a dramatic decrease in natural accuracy.

Adversarial Robustness

When Crowdsensing Meets Federated Learning: Privacy-Preserving Mobile Crowdsensing System

no code implementations 20 Feb 2021 Bowen Zhao, Ximeng Liu, Wei-neng Chen

Specifically, in order to protect privacy, participants locally process sensing data via federated learning and only upload encrypted training models.

Federated Learning Privacy Preserving

Robust Single-step Adversarial Training with Regularizer

no code implementations 5 Feb 2021 Lehui Xie, Yaopeng Wang, Jia-Li Yin, Ximeng Liu

Previous methods try to reduce the computational burden of adversarial training using single-step adversarial example generation schemes, which can effectively improve efficiency but also introduce the problem of catastrophic overfitting, where the robust accuracy against the Fast Gradient Sign Method (FGSM) can reach nearly 100% while the robust accuracy against Projected Gradient Descent (PGD) suddenly drops to 0% over a single epoch.
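The two attacks contrasted in that excerpt differ only in iteration count and projection: FGSM takes one signed-gradient step, while PGD iterates small steps and projects back into the ε-ball. A minimal NumPy sketch, where `grad_fn` is a hypothetical callable standing in for the model's input gradient:

```python
import numpy as np

def fgsm(x, grad_fn, eps):
    """Single-step attack: one signed-gradient move of size eps."""
    return x + eps * np.sign(grad_fn(x))

def pgd(x, grad_fn, eps, alpha, steps):
    """Multi-step attack: repeated moves of size alpha, projected
    back into the L_inf ball of radius eps after each step."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to eps-ball
    return x_adv
```

Catastrophic overfitting is precisely a model that resists the one-step `fgsm` perturbation while remaining fully vulnerable to the iterated `pgd` one.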

Pocket Diagnosis: Secure Federated Learning against Poisoning Attack in the Cloud

no code implementations 23 Sep 2020 Zhuoran Ma, Jianfeng Ma, Yinbin Miao, Ximeng Liu, Kim-Kwang Raymond Choo, Robert H. Deng

Previous works on federated learning have been inadequate in ensuring the privacy of DIs and the availability of the final federated model.

Cryptography and Security

Cloud-based Federated Boosting for Mobile Crowdsensing

no code implementations 9 May 2020 Zhuzhu Wang, Yilong Yang, Yang Liu, Ximeng Liu, Brij B. Gupta, Jianfeng Ma

In this paper, we propose a secret sharing based federated learning architecture FedXGB to achieve the privacy-preserving extreme gradient boosting for mobile crowdsensing.

Federated Learning General Classification
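The additive secret sharing primitive that such a scheme builds on is simple to sketch: each participant splits its value into random shares that sum back to the original, so an aggregator sees only masked contributions. This is a generic illustration of the primitive, not FedXGB's actual protocol:

```python
import random

MODULUS = 2**31

def share(value, n, modulus=MODULUS):
    """Split an integer into n additive shares mod `modulus`;
    any n-1 shares alone reveal nothing about the value."""
    shares = [random.randrange(modulus) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % modulus)
    return shares

def reconstruct(shares, modulus=MODULUS):
    """Recombine shares: their sum mod `modulus` is the value."""
    return sum(shares) % modulus
```

In a boosting setting, participants would share their per-node gradient and Hessian sums this way, so the server learns only the aggregate statistics needed to grow the tree.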

Learn to Forget: Machine Unlearning via Neuron Masking

no code implementations 24 Mar 2020 Yang Liu, Zhuo Ma, Ximeng Liu, Jian Liu, Zhongyuan Jiang, Jianfeng Ma, Philip Yu, Kui Ren

To this end, machine unlearning becomes a popular research topic, which allows users to eliminate memorization of their private data from a trained machine learning model. In this paper, we propose the first uniform metric, called forgetting rate, to measure the effectiveness of a machine unlearning method.

BIG-bench Machine Learning Federated Learning
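The spirit of such a forgetting-rate metric can be sketched as the fraction of erased samples the model "remembered" before unlearning but no longer does afterwards. Only the metric's name appears in the excerpt, so the function below is a plausible reading, not the paper's formal definition:

```python
def forgetting_rate(before, after):
    """Fraction of erased samples memorized before unlearning
    (True in `before`) that are no longer memorized after
    (False in `after`).

    A sketch of the metric's spirit; the paper's formal
    definition may differ in detail.
    """
    remembered = [i for i, b in enumerate(before) if b]
    if not remembered:
        return 0.0
    return sum(1 for i in remembered if not after[i]) / len(remembered)
```

A rate near 1.0 would indicate the unlearning method wiped essentially all memorization of the erased samples.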

Revocable Federated Learning: A Benchmark of Federated Forest

no code implementations 8 Nov 2019 Yang Liu, Zhuo Ma, Ximeng Liu, Zhuzhu Wang, Siqi Ma, Ken Ren

A learning federation is composed of multiple participants who use the federated learning technique to collaboratively train a machine learning model without directly revealing the local data.

Federated Learning
