no code implementations • 9 Jul 2024 • Yuqi Jia, Minghong Fang, Hongbin Liu, Jinghuai Zhang, Neil Zhenqiang Gong
Existing defenses mainly focus on protecting the training phase of FL such that the learnt global model is poison-free.
no code implementations • 14 Jun 2024 • Minghong Fang, Zifan Zhang, Hairi, Prashant Khanduri, Jia Liu, Songtao Lu, Yuchen Liu, Neil Gong
However, due to its fully decentralized nature, DFL is highly vulnerable to poisoning attacks, where malicious clients can manipulate the system by sending carefully crafted local models to their neighboring clients.
no code implementations • 4 May 2024 • Haibo Yang, Peiwen Qiu, Prashant Khanduri, Minghong Fang, Jia Liu
A popular approach to mitigating the impact of incomplete client participation is the server-assisted federated learning (SA-FL) framework, in which the server is equipped with an auxiliary dataset.
no code implementations • 22 Apr 2024 • Zifan Zhang, Minghong Fang, Jiayuan Huang, Yuchen Liu
Federated Learning (FL) offers a distributed framework to train a global control model across multiple base stations without compromising the privacy of their local network data.
no code implementations • 5 Mar 2024 • Yichang Xu, Ming Yin, Minghong Fang, Neil Zhenqiang Gong
Recent studies have revealed that federated learning (FL), once considered secure because clients do not share their private data with the server, is vulnerable to attacks such as client-side training data distribution inference, in which a malicious client can reconstruct the victim's data.
1 code implementation • 21 Feb 2024 • Yueqi Xie, Minghong Fang, Renjie Pi, Neil Gong
In this study, we propose GradSafe, which effectively detects jailbreak prompts by scrutinizing the gradients of safety-critical parameters in LLMs.
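The core idea lends itself to a short sketch. The following is a minimal illustration, not the paper's implementation: it assumes a set of safety-critical parameter names and pre-computed reference gradients from known unsafe prompts (both hypothetical inputs here), and scores a prompt by the average cosine similarity between its gradients and those references.

```python
import torch
import torch.nn.functional as F

def gradsafe_style_score(model, loss_fn, prompt_batch, compliance_target,
                         safety_critical_names, unsafe_reference_grads):
    """Hypothetical sketch of gradient-based jailbreak detection: higher
    average cosine similarity to reference 'unsafe' gradients on the
    safety-critical parameters suggests a jailbreak prompt."""
    model.zero_grad()
    # Gradient of the loss when the model is pushed toward a compliance response.
    loss = loss_fn(model(prompt_batch), compliance_target)
    loss.backward()

    sims = []
    for name, param in model.named_parameters():
        if name in safety_critical_names and param.grad is not None:
            ref = unsafe_reference_grads[name]
            sims.append(F.cosine_similarity(param.grad.flatten(),
                                            ref.flatten(), dim=0))
    return torch.stack(sims).mean()  # flag the prompt if this exceeds a threshold
```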
no code implementations • 18 Feb 2024 • Ming Yin, Yichang Xu, Minghong Fang, Neil Zhenqiang Gong
Current poisoning attacks on federated recommender systems often rely on additional information, such as the local training data of genuine users or item popularity.
no code implementations • 20 Oct 2023 • Yuqi Jia, Minghong Fang, Neil Zhenqiang Gong
In SelfishAttack, a set of selfish clients aims to achieve competitive advantages over the remaining non-selfish ones, i.e., the final learnt local models of the selfish clients are more accurate than those of the non-selfish ones.
no code implementations • 13 Dec 2022 • Minghong Fang, Jia Liu, Neil Zhenqiang Gong, Elizabeth S. Bentley
Asynchronous FL aims to address this challenge by enabling the server to update the model once any client's model update reaches it without waiting for other clients' model updates.
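As a rough illustration of this update rule, here is a minimal sketch (not the paper's algorithm): the server applies each client update as soon as it arrives, using a staleness-discounted weight, which is one common way to handle updates computed against an old model.

```python
def async_server_step(global_model, client_update, client_round, server_round,
                      base_lr=1.0):
    """Minimal sketch of an asynchronous FL server step: the global model is
    updated immediately upon receiving any client's update. The staleness
    discount is a common heuristic, assumed here for illustration."""
    staleness = server_round - client_round      # rounds elapsed since the client synced
    alpha = base_lr / (1.0 + staleness)          # downweight stale updates
    for key in global_model:                     # model stored as a dict of tensors
        global_model[key] += alpha * client_update[key]
    return global_model
```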
no code implementations • 13 Dec 2022 • Minghong Fang, Jia Liu, Michinari Momma, Yi Sun
In this paper, we propose a new approach called fair recommendation with optimized antidote data (FairRoad), which aims to improve the fairness performance of recommender systems through the construction of a small, carefully crafted antidote dataset.
no code implementations • 17 Aug 2022 • Xin Zhang, Minghong Fang, Zhuqing Liu, Haibo Yang, Jia Liu, Zhengyuan Zhu
Moreover, whether or not the linear speedup for convergence is achievable under fully decentralized FL with data heterogeneity remains an open question.
no code implementations • 18 Feb 2021 • Minghong Fang, Minghao Sun, Qi Li, Neil Zhenqiang Gong, Jin Tian, Jia Liu
Our empirical results show that the proposed defenses can substantially reduce the estimation errors of the data poisoning attacks.
no code implementations • ICLR 2021 • Haibo Yang, Minghong Fang, Jia Liu
Our results also reveal that local steps in FL can aid convergence, and show that the maximum number of local steps can be improved to $T/m$ under full worker participation.
1 code implementation • 27 Dec 2020 • Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong
Finally, the service provider computes the average of the normalized local model updates weighted by their trust scores as a global model update, which is used to update the global model.
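That aggregation step can be sketched directly. In the sketch below, the trust scores are assumed to be ReLU-clipped cosine similarities between each client update and a clean server update, and each client update is rescaled to the server update's norm before averaging, consistent with the description above.

```python
import torch
import torch.nn.functional as F

def trust_weighted_aggregate(client_updates, server_update):
    """Sketch of trust-score-weighted aggregation over flattened model updates.
    Assumes trust scores come from ReLU-clipped cosine similarity to a clean
    server update computed on the server's own dataset."""
    server_norm = server_update.norm()
    scores, normalized = [], []
    for u in client_updates:
        score = torch.relu(F.cosine_similarity(u.flatten(),
                                               server_update.flatten(), dim=0))
        scores.append(score)
        # Normalize each update to the magnitude of the server update.
        normalized.append(u * (server_norm / (u.norm() + 1e-12)))
    scores = torch.stack(scores)
    if scores.sum() == 0:
        return torch.zeros_like(server_update)   # no trusted updates this round
    weights = scores / scores.sum()
    return sum(w * u for w, u in zip(weights, normalized))
```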
no code implementations • 19 Feb 2020 • Minghong Fang, Neil Zhenqiang Gong, Jia Liu
Given the number of fake users the attacker can inject, we formulate the crafting of rating scores for the fake users as an optimization problem.
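A generic form of such a formulation (symbols illustrative, not the paper's exact objective) maximizes the attacker's goal over the fake users' rating vectors under a per-user rating budget:

```latex
\max_{\{r_v\}_{v \in F}} \; \mathcal{H}\big(t \mid R \cup \{r_v\}_{v \in F}\big)
\quad \text{s.t.} \quad \|r_v\|_0 \le n_{\max}, \;\;
r_{vi} \in [0, r_{\max}] \;\; \forall v \in F, \forall i
```

Here $F$ is the set of fake users, $r_v$ the rating vector of fake user $v$, $R$ the genuine rating matrix, $\mathcal{H}$ the attacker's objective (e.g., the hit ratio of a target item $t$), and $n_{\max}$ a per-user rating budget; all of these symbols are assumptions for illustration.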
no code implementations • 19 Feb 2020 • Minghong Fang, Jia Liu
To address the high mining cost of proof-of-work (PoW) blockchain networks, we propose a mining resource allocation algorithm that reduces this cost.
no code implementations • 12 Jan 2020 • Xin Zhang, Minghong Fang, Jia Liu, Zhengyuan Zhu
In this paper, we consider the problem of jointly improving data privacy and communication efficiency of distributed edge learning, both of which are critical performance metrics in wireless edge network computing.
no code implementations • 26 Nov 2019 • Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
Our empirical results on four real-world datasets show that our attacks can substantially increase the error rates of the models learnt by the federated learning methods that were claimed to be robust against Byzantine failures of some client devices.
no code implementations • 10 Sep 2019 • Haibo Yang, Xin Zhang, Minghong Fang, Jia Liu
In this work, we consider the resilience of distributed algorithms based on stochastic gradient descent (SGD) in distributed learning with potentially Byzantine attackers, who could send arbitrary information to the parameter server to disrupt the training process.
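One standard aggregation rule studied in this setting (shown purely as an illustration; the paper's actual rule may differ) is the coordinate-wise median, which bounds the influence of arbitrary values sent by a minority of Byzantine workers:

```python
import torch

def coordinate_median_aggregate(worker_grads):
    """Illustrative Byzantine-resilient aggregation: the coordinate-wise
    median of worker gradients. A bounded fraction of arbitrary (Byzantine)
    gradients cannot move the median far in any coordinate."""
    stacked = torch.stack(worker_grads)        # shape: (num_workers, dim)
    return stacked.median(dim=0).values
```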
no code implementations • 11 Sep 2018 • Minghong Fang, Guolei Yang, Neil Zhenqiang Gong, Jia Liu
To address this challenge, we formulate the poisoning attacks as an optimization problem whose solution determines the rating scores for the fake users.