Rethinking Client Reweighting for Selfish Federated Learning

29 Sep 2021 · Ruichen Luo, Shoubo Hu, Lequan Yu

Most federated learning (FL) algorithms aim to learn a model that achieves optimal overall performance across all clients. However, for some clients, the model obtained by conventional federated training may perform even worse than one obtained by local training. Therefore, for a stakeholder who only cares about the performance of a few $\textit{internal clients}$, the outcome of conventional federated learning may be unsatisfactory. To this end, we study a new $\textit{selfish}$ variant of federated learning, in which the ultimate objective is to learn a model with optimal performance on the internal clients $\textit{alone}$, rather than on all clients. We further propose Variance Reduction Selfish Learning (VaRSeL), a novel algorithm that reweights the external clients based on variance reduction to learn a model desired in this setting. Within each round of federated training, it guides the model update towards the direction favored by the internal clients. We give a convergence analysis for both the strongly-convex and non-convex cases, highlighting its fine-tuning effect. Finally, we perform extensive experiments on both synthetic and real-world datasets, covering image classification, language modeling, and medical image segmentation. The experimental results corroborate our theoretical analysis and show the advantage of VaRSeL over related FL algorithms.
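
Below is a minimal, runnable sketch of the kind of client reweighting the abstract describes: external clients are upweighted when their model updates align with the direction favored by the internal clients. This is only an illustration under assumed design choices (cosine-similarity scoring, a softmax weighting with a `temperature` parameter, and the hypothetical `reweighted_aggregate` helper); it is not the paper's VaRSeL algorithm or its variance-reduction rule.

```python
# Illustrative sketch only: a toy federated aggregation step in which external
# clients are reweighted by how well their updates align with the mean update
# of the internal clients. The weighting rule (softmax over cosine similarity)
# and all names here are assumptions made for illustration, not the paper's method.
import numpy as np


def reweighted_aggregate(internal_updates, external_updates, temperature=1.0):
    """Aggregate client updates, upweighting external clients whose updates
    point in a direction favored by the internal clients.

    internal_updates, external_updates: lists of 1-D np.ndarray model updates.
    """
    internal_mean = np.mean(internal_updates, axis=0)

    # Score each external client by the cosine similarity between its update
    # and the mean internal update (higher = more aligned with internal clients).
    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    scores = np.array([cosine(u, internal_mean) for u in external_updates])

    # Turn scores into nonnegative weights with a softmax; the temperature
    # controls how aggressively misaligned external clients are downweighted.
    ext_weights = np.exp(scores / temperature)
    ext_weights /= ext_weights.sum()

    # Combine: internal clients keep uniform weight, external clients are reweighted.
    all_updates = list(internal_updates) + list(external_updates)
    all_weights = np.concatenate([
        np.full(len(internal_updates), 1.0 / len(internal_updates)),
        ext_weights,
    ])
    all_weights /= all_weights.sum()

    return sum(w * u for w, u in zip(all_weights, all_updates))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 10
    internal = [rng.normal(size=d) for _ in range(2)]
    external = [rng.normal(size=d) for _ in range(5)]
    update = reweighted_aggregate(internal, external)
    print("aggregated update shape:", update.shape)
```

In a real FL round, the aggregated update would be applied to the global model before the next round of local training; the paper additionally analyzes convergence of its reweighting scheme in the strongly-convex and non-convex settings, which this toy sketch does not attempt to reproduce.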
