Search Results for author: Minghong Fang

Found 9 papers, 1 paper with code

Data Poisoning Attacks and Defenses to Crowdsourcing Systems

no code implementations · 18 Feb 2021 · Minghong Fang, Minghao Sun, Qi Li, Neil Zhenqiang Gong, Jin Tian, Jia Liu

Our empirical results show that the proposed defenses can substantially reduce the estimation errors of the data poisoning attacks.

Data Poisoning

Achieving Linear Speedup with Partial Worker Participation in Non-IID Federated Learning

no code implementations · ICLR 2021 · Haibo Yang, Minghong Fang, Jia Liu

Our results also reveal that local steps in FL can aid convergence, and show that the maximum number of local steps can be improved to $T/m$ under full worker participation.

Federated Learning
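The partial-worker-participation scheme the paper analyzes can be illustrated with a toy FedAvg-style round: sample a subset of clients, let each run several local SGD steps, then average. This is a minimal sketch, not the paper's algorithm; the function name, the `client_grads_fn` callback, and the simple uniform sampling are all assumptions made for illustration.

```python
import numpy as np

def fedavg_partial(global_w, client_grads_fn, clients, m, local_steps, lr, rng):
    """Toy FedAvg round with partial participation (illustrative sketch):
    sample m of the clients uniformly without replacement, let each run
    `local_steps` local SGD steps from the current global model, then
    average the resulting local models on the server."""
    sampled = rng.choice(len(clients), size=m, replace=False)
    local_models = []
    for c in sampled:
        w = global_w.copy()
        for _ in range(local_steps):
            # client_grads_fn(client_data, w) returns a stochastic gradient
            w -= lr * client_grads_fn(clients[c], w)
        local_models.append(w)
    return np.mean(local_models, axis=0)
```

Increasing `local_steps` trades extra local computation for fewer communication rounds, which is the tension the $T/m$ bound in the abstract speaks to.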

FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping

1 code implementation · 27 Dec 2020 · Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong

Finally, the service provider computes the average of the normalized local model updates weighted by their trust scores as a global model update, which is used to update the global model.

Federated Learning
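The aggregation step described in the snippet above can be sketched as follows. This is a simplified reading of the FLTrust idea, not the paper's exact algorithm: trust scores are assumed here to be ReLU-clipped cosine similarities between each local update and the server's own update on its root dataset, and local updates are rescaled to the server update's norm before the weighted average.

```python
import numpy as np

def fltrust_aggregate(local_updates, server_update):
    """Trust-bootstrapped aggregation (illustrative sketch):
    score each local update by its ReLU-clipped cosine similarity to the
    server's update, normalize each local update to the server update's
    norm, then return the trust-score-weighted average."""
    server_norm = np.linalg.norm(server_update)
    scores, normalized = [], []
    for u in local_updates:
        cos = np.dot(u, server_update) / (np.linalg.norm(u) * server_norm + 1e-12)
        scores.append(max(cos, 0.0))              # negative similarity -> zero trust
        normalized.append(u / (np.linalg.norm(u) + 1e-12) * server_norm)
    total = sum(scores)
    if total == 0.0:
        return np.zeros_like(server_update)        # no update trusted this round
    return sum(s * n for s, n in zip(scores, normalized)) / total
```

An update pointing away from the server's own update gets zero weight, which is what limits the influence of poisoned clients.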

Influence Function based Data Poisoning Attacks to Top-N Recommender Systems

no code implementations · 19 Feb 2020 · Minghong Fang, Neil Zhenqiang Gong, Jia Liu

Given the number of fake users the attacker can inject, we formulate the crafting of rating scores for the fake users as an optimization problem.

Data Poisoning, Recommendation Systems

Toward Low-Cost and Stable Blockchain Networks

no code implementations · 19 Feb 2020 · Minghong Fang, Jia Liu

To address the high mining cost of blockchain networks, in this paper we propose a mining resource allocation algorithm that reduces the mining cost in proof-of-work-based (PoW) blockchain networks.

Private and Communication-Efficient Edge Learning: A Sparse Differential Gaussian-Masking Distributed SGD Approach

no code implementations · 12 Jan 2020 · Xin Zhang, Minghong Fang, Jia Liu, Zhengyuan Zhu

In this paper, we consider the problem of jointly improving data privacy and communication efficiency of distributed edge learning, both of which are critical performance metrics in wireless edge network computing.
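The title suggests the two ingredients: sparsify what each worker sends (communication efficiency) and mask it with Gaussian noise (privacy). The sketch below is an assumption-laden illustration of that combination, not the paper's algorithm; the function name, the top-k sparsification rule, and the use of the difference from the last transmitted update are all hypothetical choices.

```python
import numpy as np

def sparse_gaussian_masked_update(grad, last_sent, k, sigma, rng):
    """Illustrative sketch of a sparse, Gaussian-masked update: transmit
    only the k largest-magnitude coordinates of the difference between the
    current gradient and the last transmitted update, each perturbed by
    zero-mean Gaussian noise with standard deviation sigma."""
    diff = grad - last_sent
    idx = np.argsort(np.abs(diff))[-k:]            # top-k coordinates by magnitude
    sparse = np.zeros_like(diff)
    sparse[idx] = diff[idx] + rng.normal(0.0, sigma, size=k)
    return sparse
```

Larger `sigma` gives stronger masking at the cost of noisier updates; smaller `k` gives cheaper communication at the cost of slower convergence.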

Local Model Poisoning Attacks to Byzantine-Robust Federated Learning

no code implementations · 26 Nov 2019 · Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

Our empirical results on four real-world datasets show that our attacks can substantially increase the error rates of the models learnt by the federated learning methods that were claimed to be robust against Byzantine failures of some client devices.

BIG-bench Machine Learning, Data Poisoning +2

Byzantine-Resilient Stochastic Gradient Descent for Distributed Learning: A Lipschitz-Inspired Coordinate-wise Median Approach

no code implementations · 10 Sep 2019 · Haibo Yang, Xin Zhang, Minghong Fang, Jia Liu

In this work, we consider the resilience of distributed algorithms based on stochastic gradient descent (SGD) in distributed learning with potentially Byzantine attackers, who could send arbitrary information to the parameter server to disrupt the training process.
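The core coordinate-wise median aggregation that this line of work builds on is simple to state. This is a minimal sketch of the generic robust-aggregation primitive, not the paper's Lipschitz-inspired variant:

```python
import numpy as np

def coordinate_wise_median(gradients):
    """Robust aggregation by coordinate-wise median (illustrative sketch):
    stack the workers' gradients and take the median along the worker axis,
    so a minority of Byzantine workers cannot drag any single coordinate
    arbitrarily far from the benign values."""
    return np.median(np.stack(gradients), axis=0)
```

With fewer than half the workers Byzantine, each coordinate of the output lies within the range spanned by benign workers' values for that coordinate.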

Poisoning Attacks to Graph-Based Recommender Systems

no code implementations · 11 Sep 2018 · Minghong Fang, Guolei Yang, Neil Zhenqiang Gong, Jia Liu

To address the challenge, we formulate the poisoning attacks as an optimization problem, solving which determines the rating scores for the fake users.

Recommendation Systems
