no code implementations • 18 Feb 2021 • Minghong Fang, Minghao Sun, Qi Li, Neil Zhenqiang Gong, Jin Tian, Jia Liu
Our empirical results show that the proposed defenses can substantially reduce the estimation errors of the data poisoning attacks.
no code implementations • ICLR 2021 • Haibo Yang, Minghong Fang, Jia Liu
Our results also reveal that local steps in FL can aid convergence, and show that the maximum number of local steps can be improved to $T/m$ under full worker participation.
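As context for what "local steps" means here, a minimal FedAvg-style sketch of one communication round appears below; the names (`grad`, `lr`, `K`) and the plain averaging are generic illustrations, not this paper's specific algorithm or analysis.

```python
import numpy as np

def fl_round(w_global, workers, grad, lr=0.1, K=5):
    """One round of local-SGD federated learning (generic sketch).

    workers: list of per-worker datasets (full worker participation)
    grad:    function grad(w, data) returning a stochastic gradient
    K:       number of local SGD steps per worker between syncs
    """
    updates = []
    for data in workers:
        w = w_global.copy()
        for _ in range(K):          # K local steps before communicating
            w -= lr * grad(w, data)
        updates.append(w - w_global)
    # Server averages the local model updates
    return w_global + np.mean(updates, axis=0)
```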
1 code implementation • 27 Dec 2020 • Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong
Finally, the service provider computes the global model update as the trust-score-weighted average of the normalized local model updates and applies it to the global model.
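A minimal sketch of the aggregation step described above. The excerpt does not state how the trust scores or the normalization reference are computed, so the sketch takes the trust scores as given and scales each local update to a common reference norm before the weighted average.

```python
import numpy as np

def aggregate(local_updates, trust_scores, ref_norm=1.0):
    """Weighted average of normalized local model updates (sketch).

    local_updates: list of per-client update vectors (np.ndarray)
    trust_scores:  per-client trust scores (assumed given here)
    ref_norm:      common norm each update is scaled to (assumption)
    """
    scores = np.asarray(trust_scores, dtype=float)
    # Normalize each local update to the reference magnitude
    normalized = [ref_norm * u / (np.linalg.norm(u) + 1e-12)
                  for u in local_updates]
    # Trust-score-weighted average = global model update
    weighted = sum(s * u for s, u in zip(scores, normalized))
    return weighted / (scores.sum() + 1e-12)
```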
no code implementations • 19 Feb 2020 • Minghong Fang, Neil Zhenqiang Gong, Jia Liu
Given the number of fake users the attacker can inject, we formulate the crafting of rating scores for the fake users as an optimization problem.
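In generic form, such a formulation looks like $\max_{y_1,\dots,y_m} \mathrm{HR}_t(y_1,\dots,y_m)$ subject to $\|y_i\|_0 \le n$ for $i=1,\dots,m$, where $y_i$ is the rating vector of the $i$-th fake user, $m$ is the injection budget, $n$ caps how many items each fake user rates, and $\mathrm{HR}_t$ is the attacker's objective (e.g., how often a target item $t$ is recommended). The concrete objective and constraints are not given in this excerpt; these symbols are illustrative placeholders.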
no code implementations • 19 Feb 2020 • Minghong Fang, Jia Liu
To address the high cost of mining, in this paper we propose a mining resource allocation algorithm that reduces the mining cost in proof-of-work (PoW) based blockchain networks.
no code implementations • 12 Jan 2020 • Xin Zhang, Minghong Fang, Jia Liu, Zhengyuan Zhu
In this paper, we consider the problem of jointly improving data privacy and communication efficiency of distributed edge learning, both of which are critical performance metrics in wireless edge network computing.
no code implementations • 26 Nov 2019 • Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
Our empirical results on four real-world datasets show that our attacks can substantially increase the error rates of the models learnt by the federated learning methods that were claimed to be robust against Byzantine failures of some client devices.
no code implementations • 10 Sep 2019 • Haibo Yang, Xin Zhang, Minghong Fang, Jia Liu
In this work, we consider the resilience of distributed learning algorithms based on stochastic gradient descent (SGD) against potentially Byzantine attackers, who can send arbitrary information to the parameter server to disrupt the training process.
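The excerpt does not name the aggregation rule studied in the paper; as an illustration of the setting, the sketch below uses coordinate-wise median, a standard Byzantine-resilient alternative to plain gradient averaging at the parameter server.

```python
import numpy as np

def robust_step(w, grads, lr=0.1):
    """One SGD step with a Byzantine-resilient aggregator (illustrative).

    grads: list of worker gradients; some rows may be arbitrary
           (Byzantine) values sent to disrupt training.
    """
    G = np.stack(grads)           # one row per worker
    agg = np.median(G, axis=0)    # coordinate-wise median resists outliers
    return w - lr * agg
```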
no code implementations • 11 Sep 2018 • Minghong Fang, Guolei Yang, Neil Zhenqiang Gong, Jia Liu
To address the challenge, we formulate the poisoning attacks as an optimization problem whose solution determines the rating scores for the fake users.