Search Results for author: Minghong Fang

Found 16 papers, 2 papers with code

Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks

no code implementations • 5 Mar 2024 • Yichang Xu, Ming Yin, Minghong Fang, Neil Zhenqiang Gong

Recent studies have revealed that federated learning (FL), once considered secure because clients do not share their private data with the server, is vulnerable to attacks such as client-side training data distribution inference, where a malicious client can infer the distribution of a victim client's training data.

Federated Learning

GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis

1 code implementation • 21 Feb 2024 • Yueqi Xie, Minghong Fang, Renjie Pi, Neil Gong

In this study, we propose GradSafe, which effectively detects unsafe prompts by scrutinizing the gradients of safety-critical parameters in LLMs.
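
A toy sketch of the stated idea, assuming that unsafe prompts yield gradients over the safety-critical parameter slice that align (high cosine similarity) with a reference gradient aggregated from a few known unsafe prompts; the function names and the 0.5 threshold are illustrative, not the paper's exact procedure (the entry above links a code implementation).

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two flattened gradient vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def is_unsafe(prompt_grad, unsafe_ref_grad, threshold=0.5):
    """Flag a prompt whose safety-critical gradients align with the
    reference gradient of known unsafe prompts (illustrative threshold)."""
    return cosine(prompt_grad, unsafe_ref_grad) >= threshold

# Toy data: a reference gradient built from known unsafe prompts, and a
# candidate prompt's gradient over the same safety-critical slice.
rng = np.random.default_rng(0)
unsafe_ref = rng.normal(size=128)
candidate = unsafe_ref + 0.3 * rng.normal(size=128)   # correlated -> flagged
print(is_unsafe(candidate, unsafe_ref))               # True
```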

Poisoning Federated Recommender Systems with Fake Users

no code implementations • 18 Feb 2024 • Ming Yin, Yichang Xu, Minghong Fang, Neil Zhenqiang Gong

Current poisoning attacks on federated recommender systems often rely on additional information, such as the local training data of genuine users or item popularity.

Federated Learning • Recommendation Systems

Competitive Advantage Attacks to Decentralized Federated Learning

no code implementations • 20 Oct 2023 • Yuqi Jia, Minghong Fang, Neil Zhenqiang Gong

In SelfishAttack, a set of selfish clients aim to achieve competitive advantages over the remaining non-selfish ones, i.e., the final learnt local models of the selfish clients are more accurate than those of the non-selfish ones.

Federated Learning

FairRoad: Achieving Fairness for Recommender Systems with Optimized Antidote Data

no code implementations • 13 Dec 2022 • Minghong Fang, Jia Liu, Michinari Momma, Yi Sun

In this paper, we propose a new approach called fair recommendation with optimized antidote data (FairRoad), which aims to improve the fairness performance of recommender systems through the construction of a small, carefully crafted antidote dataset.

Fairness • Recommendation Systems

AFLGuard: Byzantine-robust Asynchronous Federated Learning

no code implementations • 13 Dec 2022 • Minghong Fang, Jia Liu, Neil Zhenqiang Gong, Elizabeth S. Bentley

Asynchronous FL aims to address this challenge by enabling the server to update the model as soon as any client's model update reaches it, without waiting for the other clients' updates (see the sketch after this entry).

Federated Learning
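
A minimal sketch of the asynchronous server step referenced above, under the assumption that the server maintains a small trusted dataset whose own update serves as a reference for filtering Byzantine clients; the acceptance test and the knob lam are illustrative, not the paper's exact criterion.

```python
import numpy as np

def aflguard_server_step(model, client_update, server_update, lr=0.1, lam=1.0):
    """One asynchronous step: the server acts as soon as a single client
    update arrives.  The update is accepted only if it lies within
    lam * ||server_update|| of the update computed on the server's own
    small trusted dataset; rejected updates are simply dropped."""
    if np.linalg.norm(client_update - server_update) <= lam * np.linalg.norm(server_update):
        model = model - lr * client_update   # accepted: apply immediately
    return model
```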

NET-FLEET: Achieving Linear Convergence Speedup for Fully Decentralized Federated Learning with Heterogeneous Data

no code implementations • 17 Aug 2022 • Xin Zhang, Minghong Fang, Zhuqing Liu, Haibo Yang, Jia Liu, Zhengyuan Zhu

Moreover, whether or not the linear speedup for convergence is achievable under fully decentralized FL with data heterogeneity remains an open question.

Federated Learning • Open-Ended Question Answering

Data Poisoning Attacks and Defenses to Crowdsourcing Systems

no code implementations • 18 Feb 2021 • Minghong Fang, Minghao Sun, Qi Li, Neil Zhenqiang Gong, Jin Tian, Jia Liu

Our empirical results show that the proposed defenses can substantially reduce the estimation errors of the data poisoning attacks.

Data Poisoning

Achieving Linear Speedup with Partial Worker Participation in Non-IID Federated Learning

no code implementations • ICLR 2021 • Haibo Yang, Minghong Fang, Jia Liu

Our results also reveal that local steps in FL can help convergence, and show that the maximum number of local steps can be improved to $T/m$ under full worker participation.

Federated Learning • Open-Ended Question Answering

FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping

1 code implementation • 27 Dec 2020 • Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong

Finally, the service provider computes the average of the normalized local model updates, weighted by their trust scores, as the global model update, which is used to update the global model (see the sketch after this entry).

Federated Learning
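
The snippet above describes the aggregation rule concretely enough to sketch. A minimal numpy version, assuming trust scores are ReLU-clipped cosine similarities between each client update and the update the server computes on its own small root dataset; this is an illustrative sketch, not the linked reference implementation.

```python
import numpy as np

def fltrust_aggregate(client_updates, server_update):
    """Trust-bootstrapped aggregation: ReLU-clipped cosine similarity with
    the server's own (root-dataset) update gives each client a trust score;
    client updates are rescaled to the server update's norm and averaged
    with trust-score weights."""
    g0 = server_update
    scores, normalized = [], []
    for g in client_updates:
        cos = g @ g0 / (np.linalg.norm(g) * np.linalg.norm(g0) + 1e-12)
        scores.append(max(cos, 0.0))                    # ReLU trust score
        normalized.append(g * np.linalg.norm(g0) / (np.linalg.norm(g) + 1e-12))
    scores = np.asarray(scores)
    if scores.sum() == 0:                               # no client trusted this round
        return np.zeros_like(g0)
    return (scores[:, None] * np.stack(normalized)).sum(axis=0) / scores.sum()
```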

Influence Function based Data Poisoning Attacks to Top-N Recommender Systems

no code implementations • 19 Feb 2020 • Minghong Fang, Neil Zhenqiang Gong, Jia Liu

Given the number of fake users the attacker can inject, we formulate the crafting of rating scores for the fake users as an optimization problem.

Data Poisoning • Recommendation Systems

Toward Low-Cost and Stable Blockchain Networks

no code implementations • 19 Feb 2020 • Minghong Fang, Jia Liu

To address the high mining cost of PoW-based (proof-of-work-based) blockchain networks, in this paper we propose a mining resource allocation algorithm that reduces the mining cost of such networks.

Private and Communication-Efficient Edge Learning: A Sparse Differential Gaussian-Masking Distributed SGD Approach

no code implementations • 12 Jan 2020 • Xin Zhang, Minghong Fang, Jia Liu, Zhengyuan Zhu

In this paper, we consider the problem of jointly improving data privacy and communication efficiency of distributed edge learning, both of which are critical performance metrics in wireless edge network computing.
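
The abstract states the goals but not the mechanism; the sketch below is a guess at the two ingredients named in the title, assuming "sparse" means top-k sparsification of the local update (for communication efficiency) and "Gaussian-masking" means additive Gaussian noise on the surviving coordinates (for privacy). All names and parameters are illustrative.

```python
import numpy as np

def sparse_gaussian_masked_update(grad, k=10, sigma=0.1, rng=None):
    """Illustrative sketch: transmit only the k largest-magnitude gradient
    coordinates (communication efficiency) and add Gaussian noise to the
    surviving values (privacy)."""
    rng = rng if rng is not None else np.random.default_rng()
    sparse = np.zeros_like(grad)
    idx = np.argsort(np.abs(grad))[-k:]                         # top-k support
    sparse[idx] = grad[idx] + rng.normal(scale=sigma, size=k)   # Gaussian mask
    return sparse
```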

Local Model Poisoning Attacks to Byzantine-Robust Federated Learning

no code implementations • 26 Nov 2019 • Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

Our empirical results on four real-world datasets show that our attacks can substantially increase the error rates of the models learnt by the federated learning methods that were claimed to be robust against Byzantine failures of some client devices.

BIG-bench Machine Learning • Data Poisoning • +2

Byzantine-Resilient Stochastic Gradient Descent for Distributed Learning: A Lipschitz-Inspired Coordinate-wise Median Approach

no code implementations • 10 Sep 2019 • Haibo Yang, Xin Zhang, Minghong Fang, Jia Liu

In this work, we consider the resilience of distributed algorithms based on stochastic gradient descent (SGD) in distributed learning with potentially Byzantine attackers, who could send arbitrary information to the parameter server to disrupt the training process.
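
The paper's rule adds a Lipschitz-inspired selection step that the snippet above does not spell out; the sketch below shows only the plain coordinate-wise median building block named in the title, which already caps the influence of a minority of Byzantine workers.

```python
import numpy as np

def coordinatewise_median(updates):
    """Aggregate worker gradients by taking the median of each coordinate
    across workers, so a minority of arbitrary (Byzantine) gradients cannot
    pull any coordinate arbitrarily far."""
    return np.median(np.stack(updates), axis=0)

# Toy example: 4 honest workers plus 1 Byzantine worker sending garbage.
honest = [np.array([1.0, -0.5]) + 0.1 * i for i in range(4)]
byzantine = [np.array([1e6, -1e6])]
print(coordinatewise_median(honest + byzantine))   # stays near the honest gradients
```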

Poisoning Attacks to Graph-Based Recommender Systems

no code implementations • 11 Sep 2018 • Minghong Fang, Guolei Yang, Neil Zhenqiang Gong, Jia Liu

To address the challenge, we formulate the poisoning attacks as an optimization problem, solving which determines the rating scores for the fake users.

Recommendation Systems
