2 code implementations • 22 Aug 2023 • Xiaoyan Cao, Yiyao Zheng, Yao Yao, Huapeng Qin, Xiaoyu Cao, Shihui Guo
Existing trackers can be categorized into two association paradigms: the single-feature paradigm (based on either motion or appearance features alone) and the serial paradigm (one feature serves as the primary cue while the other plays a secondary role).
no code implementations • 20 Oct 2022 • Xiaoyu Cao, Jinyuan Jia, Zaixi Zhang, Neil Zhenqiang Gong
Existing defenses focus on preventing a small number of malicious clients from poisoning the global model via robust federated learning methods, and on detecting malicious clients when a large number of them are present.
no code implementations • 2 Oct 2022 • Xiaoyu Cao, Zaixi Zhang, Jinyuan Jia, Neil Zhenqiang Gong
Our key idea is to divide the clients into groups, learn a global model for each group of clients using any existing federated learning method, and take a majority vote among the global models to classify a test input.
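A minimal sketch of this grouping-and-voting idea, assuming a generic train_federated(clients) helper that runs any existing federated learning method and returns a model with a predict method; the function and parameter names are illustrative, not the paper's code.

```python
import numpy as np

def ensemble_fl_predict(clients, test_input, num_groups, train_federated, seed=0):
    """Split clients into disjoint groups, train one global model per group with
    any existing FL method, and return the plurality label for a test input."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(clients))
    groups = np.array_split(order, num_groups)

    # One global model per group, trained with the chosen FL algorithm.
    global_models = [train_federated([clients[i] for i in group]) for group in groups]

    # Plurality vote over the per-group predictions.
    votes = [model.predict(test_input) for model in global_models]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]
```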
1 code implementation • 19 Jul 2022 • Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
FLDetector aims to detect and remove the majority of the malicious clients such that a Byzantine-robust FL method can learn an accurate global model using the remaining clients.
1 code implementation • 9 Apr 2022 • Meihong Wu, Xiaoyan Cao, Xiaoyu Cao, Shihui Guo
The motion and interactions of social insects (such as ants) have been studied by many researchers to understand their clustering mechanisms.
1 code implementation • 16 Mar 2022 • Xiaoyu Cao, Neil Zhenqiang Gong
Specifically, we assume the attacker injects fake clients to a federated learning system and sends carefully crafted fake local model updates to the cloud server during training, such that the learnt global model has low accuracy for many indiscriminate test inputs.
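One way such a fake local model update could be crafted, sketched here as dragging the aggregated model toward an attacker-chosen base model; the scaling factor and base model are illustrative assumptions, and the paper's exact construction may differ.

```python
import numpy as np

def craft_fake_update(w_global, w_base, scale=1e6):
    """Report a scaled difference that pulls the aggregated global model toward
    an attacker-chosen base model; the large scale lets a modest number of fake
    clients dominate an averaging-based aggregator."""
    return scale * (w_base - w_global)

# Illustrative use: every fake client reports the same crafted update.
w_global = np.random.randn(1000)        # current global model (flattened)
w_base = 0.01 * np.random.randn(1000)   # attacker-chosen (e.g., random) base model
fake_updates = [craft_fake_update(w_global, w_base) for _ in range(100)]
```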
no code implementations • 13 Sep 2021 • Yuankun Yang, Chenyue Liang, Hongyu He, Xiaoyu Cao, Neil Zhenqiang Gong
A key limitation of passive detection is that it cannot detect fake faces that are generated by new deepfake generation methods.
1 code implementation • 6 Jul 2021 • Xiaoyan Cao, Yao Yao, Lanqing Li, Wanpeng Zhang, Zhicheng An, Zhong Zhang, Li Xiao, Shihui Guo, Xiaoyu Cao, Meihong Wu, Dijun Luo
However, optimal control of autonomous greenhouses is challenging: it requires decision-making based on high-dimensional sensory data, and scaling up production is limited by the scarcity of labor capable of handling this task.
no code implementations • 5 Jul 2021 • Xiaoyu Cao, Neil Zhenqiang Gong
Existing studies have mainly focused on improving detection performance in non-adversarial settings, leaving the security of deepfake detection in adversarial settings largely unexplored.
no code implementations • 3 Feb 2021 • Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
We show that our ensemble federated learning with any base federated learning algorithm is provably secure against malicious clients.
1 code implementation • 27 Dec 2020 • Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong
Finally, the service provider computes the average of the normalized local model updates weighted by their trust scores as a global model update, which is used to update the global model.
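A minimal sketch of this trust-weighted aggregation step. The trust score used below, a ReLU-clipped cosine similarity between each local update and a reference (server) update, is one plausible choice and an assumption of this sketch; see the paper for the exact definition.

```python
import numpy as np

def trust_weighted_aggregate(client_updates, reference_update):
    """Normalize each local update to the magnitude of a reference update and
    average them weighted by per-client trust scores."""
    ref_norm = np.linalg.norm(reference_update)
    scores, normalized = [], []
    for u in client_updates:
        u_norm = np.linalg.norm(u) + 1e-12
        cos = float(np.dot(u, reference_update)) / (u_norm * ref_norm + 1e-12)
        scores.append(max(cos, 0.0))                 # ReLU-clipped trust score
        normalized.append(u * (ref_norm / u_norm))   # rescale to reference magnitude
    scores = np.asarray(scores)
    if scores.sum() == 0.0:
        return np.zeros_like(reference_update)
    return np.average(normalized, axis=0, weights=scores)
```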
no code implementations • 7 Dec 2020 • Jinyuan Jia, Yupei Liu, Xiaoyu Cao, Neil Zhenqiang Gong
Moreover, our evaluation results on MNIST and CIFAR10 show that the intrinsic certified robustness guarantees of kNN and rNN outperform those provided by state-of-the-art certified defenses.
no code implementations • ICLR 2022 • Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, Neil Zhenqiang Gong
For instance, our method can build a classifier that achieves a certified top-3 accuracy of 69.2% on ImageNet when an attacker can arbitrarily perturb 5 pixels of a testing image.
no code implementations • 24 Aug 2020 • Binghui Wang, Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong
Specifically, we prove the certified robustness guarantee of any GNN for both node and graph classifications against structural perturbation.
1 code implementation • 11 Aug 2020 • Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong
Specifically, we show that bagging with an arbitrary base learning algorithm provably predicts the same label for a testing example when the number of modified, deleted, and/or inserted training examples is bounded by a threshold.
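A minimal sketch of the bagging ensemble behind this guarantee, assuming a generic base_train helper that fits one base model on a subsample and returns an object with a predict method; the certificate itself comes from the gap between the top-two label counts, which is not computed here.

```python
import numpy as np

def bagging_predict(train_examples, test_input, base_train, num_models=100,
                    subsample_size=50, seed=0):
    """Train many base models on small random subsamples (with replacement) of
    the training set and return the plurality label for a test input."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(num_models):
        idx = rng.choice(len(train_examples), size=subsample_size, replace=True)
        model = base_train([train_examples[i] for i in idx])
        votes.append(model.predict(test_input))
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]
```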
no code implementations • 26 Feb 2020 • Binghui Wang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
Specifically, in this work, we study the feasibility and effectiveness of certifying robustness against backdoor attacks using a recent technique called randomized smoothing.
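For context, a minimal sketch of the randomized smoothing prediction step, assuming a base_classifier callable that maps a noisy input to a label; for backdoor attacks the randomization can also be applied to the training data, and only the test-time voting is shown here.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.5, num_samples=1000, seed=0):
    """Add Gaussian noise to the input many times, query the base classifier,
    and return the most frequent label."""
    rng = np.random.default_rng(seed)
    votes = [base_classifier(x + sigma * rng.standard_normal(x.shape))
             for _ in range(num_samples)]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]
```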
no code implementations • 9 Feb 2020 • Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Neil Zhenqiang Gong
However, several recent studies showed that community detection is vulnerable to adversarial structural perturbation.
1 code implementation • ICLR 2020 • Jinyuan Jia, Xiaoyu Cao, Binghui Wang, Neil Zhenqiang Gong
For example, our method can obtain an ImageNet classifier with a certified top-5 accuracy of 62.8% when the ℓ2-norms of the adversarial perturbations are less than 0.5 (=127/255).
no code implementations • 26 Nov 2019 • Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
Our empirical results on four real-world datasets show that our attacks can substantially increase the error rates of the models learnt by the federated learning methods that were claimed to be robust against Byzantine failures of some client devices.
no code implementations • 5 Nov 2019 • Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
Local Differential Privacy (LDP) protocols enable an untrusted data collector to perform privacy-preserving data analytics.
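As generic background for this LDP setting, a minimal sketch of the classic randomized response mechanism that many LDP protocols build on; this is illustrative background, not the specific protocols studied in the paper.

```python
import numpy as np

def randomized_response(true_bit, epsilon, rng=None):
    """Report the true bit with probability e^eps / (e^eps + 1); otherwise flip
    it, so the collector never learns any single user's value with certainty."""
    rng = rng or np.random.default_rng()
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return true_bit if rng.random() < p_keep else 1 - true_bit

def estimate_frequency(reports, epsilon):
    """Unbiased estimate of the true fraction of 1s from the noisy reports."""
    p = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return (np.mean(reports) - (1.0 - p)) / (2.0 * p - 1.0)
```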
no code implementations • 28 Oct 2019 • Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
Our key observation is that a DNN classifier can be uniquely represented by its classification boundary.
no code implementations • 17 Sep 2017 • Xiaoyu Cao, Neil Zhenqiang Gong
Our key observation is that adversarial examples are close to the classification boundary.