Search Results for author: Xiaoyu Cao

Found 22 papers, 8 papers with code

TOPIC: A Parallel Association Paradigm for Multi-Object Tracking under Complex Motions and Diverse Scenes

2 code implementations • 22 Aug 2023 • Xiaoyan Cao, Yiyao Zheng, Yao Yao, Huapeng Qin, Xiaoyu Cao, Shihui Guo

Existing trackers can be categorized into two association paradigms: single-feature paradigm (based on either motion or appearance feature) and serial paradigm (one feature serves as secondary while the other is primary).

Multi-Object Tracking

FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information

no code implementations • 20 Oct 2022 • Xiaoyu Cao, Jinyuan Jia, Zaixi Zhang, Neil Zhenqiang Gong

Existing defenses focus on preventing a small number of malicious clients from poisoning the global model via robust federated learning methods, and on detecting malicious clients when there are a large number of them.

Federated Learning

FLCert: Provably Secure Federated Learning against Poisoning Attacks

no code implementations • 2 Oct 2022 • Xiaoyu Cao, Zaixi Zhang, Jinyuan Jia, Neil Zhenqiang Gong

Our key idea is to divide the clients into groups, learn a global model for each group of clients using any existing federated learning method, and take a majority vote among the global models to classify a test input.
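A minimal sketch of that grouping-and-voting idea (assuming a generic train_federated routine and models exposing a predict method; all names here are illustrative, not the authors' code):

import random
from collections import Counter

def flcert_train(clients, num_groups, train_federated, seed=0):
    # Randomly partition clients into groups and train one global model
    # per group with any existing federated learning method.
    rng = random.Random(seed)
    shuffled = clients[:]
    rng.shuffle(shuffled)
    groups = [shuffled[i::num_groups] for i in range(num_groups)]
    return [train_federated(group) for group in groups]

def flcert_predict(models, x):
    # Classify a test input by majority vote among the group models.
    votes = Counter(model.predict(x) for model in models)
    return votes.most_common(1)[0][0]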

Federated Learning

FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients

1 code implementation • 19 Jul 2022 • Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

FLDetector aims to detect and remove the majority of the malicious clients such that a Byzantine-robust FL method can learn an accurate global model using the remaining clients.

Federated Learning • Model Poisoning

A dataset of ant colonies motion trajectories in indoor and outdoor scenes for social cluster behavior study

1 code implementation • 9 Apr 2022 • Meihong Wu, Xiaoyan Cao, Xiaoyu Cao, Shihui Guo

Motion and interaction of social insects (such as ants) have been studied by many researchers to understand the clustering mechanism.

MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients

1 code implementation • 16 Mar 2022 • Xiaoyu Cao, Neil Zhenqiang Gong

Specifically, we assume the attacker injects fake clients into a federated learning system and sends carefully crafted fake local model updates to the cloud server during training, such that the learnt global model has low accuracy for many indiscriminate test inputs.
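A rough sketch of the kind of fake update such a client might report (assuming, purely as an illustration of the idea, that each fake update is a scaled vector pulling the global model toward an attacker-chosen base model; the scale factor, the base model, and the function name are hypothetical):

import numpy as np

def mpaf_fake_update(global_model, base_model, scale=1e6):
    # Illustrative fake local update: a scaled step from the current global
    # model toward the attacker-chosen base model, so that averaging many
    # such fake updates drags the aggregated global model toward the base.
    return scale * (np.asarray(base_model) - np.asarray(global_model))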

Federated Learning • Model Poisoning

FaceGuard: Proactive Deepfake Detection

no code implementations • 13 Sep 2021 • Yuankun Yang, Chenyue Liang, Hongyu He, Xiaoyu Cao, Neil Zhenqiang Gong

A key limitation of passive detection is that it cannot detect fake faces that are generated by new deepfake generation methods.

DeepFake Detection • Face Swapping

iGrow: A Smart Agriculture Solution to Autonomous Greenhouse Control

1 code implementation • 6 Jul 2021 • Xiaoyan Cao, Yao Yao, Lanqing Li, Wanpeng Zhang, Zhicheng An, Zhong Zhang, Li Xiao, Shihui Guo, Xiaoyu Cao, Meihong Wu, Dijun Luo

However, the optimal control of autonomous greenhouses is challenging, requiring decision-making based on high-dimensional sensory data, and the scaling of production is limited by the scarcity of labor capable of handling this task.

Cloud Computing • Decision Making

Understanding the Security of Deepfake Detection

no code implementations • 5 Jul 2021 • Xiaoyu Cao, Neil Zhenqiang Gong

Existing studies mainly focused on improving the detection performance in non-adversarial settings, leaving security of deepfake detection in adversarial settings largely unexplored.

DeepFake Detection • Face Swapping

Provably Secure Federated Learning against Malicious Clients

no code implementations • 3 Feb 2021 • Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

We show that our ensemble federated learning with any base federated learning algorithm is provably secure against malicious clients.

Federated Learning • Human Activity Recognition

FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping

1 code implementation • 27 Dec 2020 • Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong

Finally, the service provider computes the average of the normalized local model updates weighted by their trust scores as a global model update, which is used to update the global model.
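A minimal sketch of that aggregation step (assuming, per the paper, that each trust score is the ReLU-clipped cosine similarity between a client update and a server update computed on a small clean root dataset, and that updates are flattened NumPy vectors; names are illustrative):

import numpy as np

def fltrust_aggregate(client_updates, server_update):
    # Trust score: ReLU-clipped cosine similarity between each client update
    # and the server update; each client update is rescaled to the norm of
    # the server update, and the global update is their weighted average.
    s_norm = np.linalg.norm(server_update)
    scores, normalized = [], []
    for u in client_updates:
        cos = float(np.dot(u, server_update)) / (np.linalg.norm(u) * s_norm + 1e-12)
        scores.append(max(cos, 0.0))
        normalized.append(u * (s_norm / (np.linalg.norm(u) + 1e-12)))
    scores = np.asarray(scores)
    if scores.sum() == 0.0:
        return np.zeros_like(server_update)
    return np.average(np.stack(normalized), axis=0, weights=scores)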

Federated Learning

Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks

no code implementations • 7 Dec 2020 • Jinyuan Jia, Yupei Liu, Xiaoyu Cao, Neil Zhenqiang Gong

Moreover, our evaluation results on MNIST and CIFAR10 show that the intrinsic certified robustness guarantees of kNN and rNN outperform those provided by state-of-the-art certified defenses.

Data Poisoning

Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations

no code implementations • ICLR 2022 • Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, Neil Zhenqiang Gong

For instance, our method can build a classifier that achieves a certified top-3 accuracy of 69.2% on ImageNet when an attacker can arbitrarily perturb 5 pixels of a testing image.

Recommendation Systems

Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation

no code implementations • 24 Aug 2020 • Binghui Wang, Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong

Specifically, we prove the certified robustness guarantee of any GNN for both node and graph classifications against structural perturbation.

Cryptography and Security

Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks

1 code implementation • 11 Aug 2020 • Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong

Specifically, we show that bagging with an arbitrary base learning algorithm provably predicts the same label for a testing example when the number of modified, deleted, and/or inserted training examples is bounded by a threshold.
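A minimal sketch of the bagging predictor behind that guarantee (the certified threshold itself comes from the gap between the top-two label vote counts, which is not computed here; base_learner and the predict interface are assumptions):

import random
from collections import Counter

def bagging_train(train_set, num_models, subsample_size, base_learner, seed=0):
    # Train each base model on a random subsample (with replacement)
    # of the training set, using any base learning algorithm.
    rng = random.Random(seed)
    return [base_learner(rng.choices(train_set, k=subsample_size))
            for _ in range(num_models)]

def bagging_predict(models, x):
    # Majority vote; the gap between the top-two vote counts bounds how many
    # modified/deleted/inserted training examples can change the prediction.
    votes = Counter(m.predict(x) for m in models)
    return votes.most_common(1)[0][0]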

Data Poisoning • Ensemble Learning

On Certifying Robustness against Backdoor Attacks via Randomized Smoothing

no code implementations • 26 Feb 2020 • Binghui Wang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

Specifically, in this work, we study the feasibility and effectiveness of certifying robustness against backdoor attacks using a recent technique called randomized smoothing.
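For context, the basic randomized smoothing predictor looks roughly like the sketch below (Gaussian input noise and a majority vote; note that certifying against backdoor attacks, as studied in the paper, additionally requires randomizing the training data, which this sketch does not do):

import numpy as np
from collections import Counter

def smoothed_predict(base_classifier, x, sigma=0.5, num_samples=1000, seed=0):
    # Majority vote of the base classifier over Gaussian-noised copies of x.
    rng = np.random.default_rng(seed)
    votes = Counter(
        base_classifier(x + rng.normal(0.0, sigma, size=x.shape))
        for _ in range(num_samples)
    )
    return votes.most_common(1)[0][0]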

Backdoor Attack

Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing

1 code implementation • ICLR 2020 • Jinyuan Jia, Xiaoyu Cao, Binghui Wang, Neil Zhenqiang Gong

For example, our method can obtain an ImageNet classifier with a certified top-5 accuracy of 62.8% when the $\ell_2$-norms of the adversarial perturbations are less than 0.5 (=127/255).

Local Model Poisoning Attacks to Byzantine-Robust Federated Learning

no code implementations • 26 Nov 2019 • Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

Our empirical results on four real-world datasets show that our attacks can substantially increase the error rates of the models learnt by the federated learning methods that were claimed to be robust against Byzantine failures of some client devices.

BIG-bench Machine Learning • Data Poisoning • +2

Data Poisoning Attacks to Local Differential Privacy Protocols

no code implementations • 5 Nov 2019 • Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

Local Differential Privacy (LDP) protocols enable an untrusted data collector to perform privacy-preserving data analytics.
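As a reminder of what such a protocol looks like, here is a sketch of k-ary randomized response, one of the classic LDP protocols (the poisoning attack itself, in which fake users report crafted values to skew the collector's estimates, is not shown; names are illustrative):

import math
import random

def krr_perturb(value, domain, epsilon, rng=random):
    # k-ary randomized response: report the true value with probability p,
    # otherwise one of the other k-1 values uniformly at random.
    # This satisfies epsilon-local differential privacy.
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if rng.random() < p:
        return value
    return rng.choice([v for v in domain if v != value])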

Data Poisoning • Cryptography and Security • Distributed, Parallel, and Cluster Computing
