Personalized Federated Learning
48 papers with code • 7 benchmarks • 7 datasets
The federated learning setup presents numerous challenges, including data heterogeneity (differences in data distributions), device heterogeneity (differing computation capabilities, network connectivity, etc.), and communication efficiency. Data heterogeneity in particular makes it hard for a single shared global model to serve all clients well. To overcome this, Personalized Federated Learning (PFL) adapts the global model to each client in the federation.
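The simplest personalization strategy implied by this setup is to fine-tune the aggregated global model on each client's own data. A minimal sketch with a linear model, assuming squared loss and FedAvg-style weighted averaging (function names here are illustrative, not from any library):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server step: weighted average of client parameter vectors
    (FedAvg aggregation), weights proportional to local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    coefs = sizes / sizes.sum()
    return sum(c * w for c, w in zip(coefs, client_weights))

def personalize(global_w, X, y, lr=0.1, steps=20):
    """Client step: fine-tune the global linear model on this client's
    local data (squared loss), yielding a personalized model."""
    w = global_w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w
```

On heterogeneous clients, the fine-tuned model typically achieves lower local loss than the shared global model, which is the basic motivation for PFL.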
Most implemented papers
Adaptive Personalized Federated Learning
Investigating the degree of personalization in federated learning algorithms shows that maximizing only the global model's performance limits the local models' capacity to personalize.
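The adaptive approach interpolates between the global and a purely local model. A hedged sketch of the idea, assuming linear models and squared loss (the alpha update here is a simplified one-step version, not the paper's exact rule):

```python
import numpy as np

def apfl_mixture(w_global, v_local, alpha):
    """Personalized model as a convex combination of the local model v
    and the global model w, controlled by the mixing weight alpha."""
    return alpha * v_local + (1.0 - alpha) * w_global

def adapt_alpha(w_global, v_local, X, y, alpha, lr=0.05):
    """One gradient step on alpha w.r.t. this client's squared loss,
    clipped to [0, 1]; alpha grows when the local model fits better."""
    mixed = apfl_mixture(w_global, v_local, alpha)
    resid = X @ mixed - y
    # chain rule: d(loss)/d(alpha) = residual projected onto X (v - w)
    grad_alpha = resid @ (X @ (v_local - w_global)) / len(y)
    return float(np.clip(alpha - lr * grad_alpha, 0.0, 1.0))
```

With alpha learned per client, clients whose data matches the global distribution lean on the global model, while outlier clients lean on their local one.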
Personalized Federated Learning with Moreau Envelopes
Federated learning (FL) is a decentralized and privacy-preserving machine learning technique in which a group of clients collaborate with a server to learn a global model without sharing clients' data.
Exploiting Shared Representations for Personalized Federated Learning
Based on this intuition, we propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
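The structure described (a shared representation plus per-client heads) can be sketched as follows, assuming a linear representation and linear heads for brevity; class and function names are illustrative:

```python
import numpy as np

class SharedRepModel:
    """Client model = shared representation (aggregated at the server)
    followed by a client-specific linear head that never leaves the
    client, as in representation-sharing PFL methods."""
    def __init__(self, B, head):
        self.B = B        # shared (d_in x k) representation matrix
        self.head = head  # client-specific k-dimensional head
    def predict(self, X):
        return X @ self.B @ self.head

def aggregate_representations(Bs):
    """Server step: average only the shared representation matrices;
    the personalized heads stay local."""
    return sum(Bs) / len(Bs)
```

Only the representation is communicated, so personalization is carried entirely by the low-dimensional local heads.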
Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach
In this paper, we study a personalized variant of federated learning whose goal is to find an initial shared model that current or new users can easily adapt to their local dataset by performing one or a few gradient-descent steps on their own data.
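The meta-learning view above can be made concrete: each client's objective is the loss evaluated *after* one local adaptation step, so the server learns an initialization that adapts well. A minimal sketch with a linear model and squared loss (an illustration of the idea, not the paper's implementation):

```python
import numpy as np

def local_loss(w, X, y):
    """Client's squared loss at parameters w."""
    return float(np.mean((X @ w - y) ** 2)) / 2

def one_step_adapt(w, X, y, lr=0.05):
    """Inner step: one gradient-descent step on the client's own data."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def meta_loss(w, X, y, lr=0.05):
    """Per-client meta-objective: loss AFTER one adaptation step; the
    server minimizes the average of this across clients."""
    return local_loss(one_step_adapt(w, X, y, lr), X, y)
```

New users personalize the learned initialization simply by running `one_step_adapt` (or a few such steps) on their own data.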
Ditto: Fair and Robust Federated Learning Through Personalization
Fairness and robustness are two important concerns for federated learning systems.
Personalized Federated Learning with First Order Model Optimization
While federated learning traditionally aims to train a single global model across decentralized local datasets, one model may not always be ideal for all participating clients.
A New Look and Convergence Rate of Federated Multi-Task Learning with Laplacian Regularization
Non-Independent and Identically Distributed (non-IID) data distribution among clients is considered the key factor that degrades the performance of federated learning (FL).
Personalized Federated Learning using Hypernetworks
In this approach, a central hypernetwork model is trained to generate a set of models, one model for each client.
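The generation step can be sketched as a map from a learnable client embedding to that client's parameter vector. A toy version with a linear hypernetwork (shapes, names, and the linear map are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

class HyperNet:
    """Toy hypernetwork: a linear map from a client embedding to the
    flat parameter vector of that client's personal model. The server
    trains the map's weights and the per-client embeddings."""
    def __init__(self, embed_dim, n_params, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(n_params, embed_dim))
    def generate(self, client_embedding):
        """Produce one client's parameters from its embedding."""
        return self.W @ client_embedding
```

Because each client gets its own embedding, the single shared hypernetwork yields a distinct personalized model per client while sharing knowledge through its weights.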
On Bridging Generic and Personalized Federated Learning for Image Classification
On the one hand, we introduce a family of losses that are robust to non-identical class distributions, enabling clients to train a generic predictor with a consistent objective across them.
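One well-known way to make a classification loss robust to non-identical class distributions is to shift logits by the log of the local class priors (balanced-softmax / logit adjustment). The sketch below shows that general idea, not necessarily the exact loss family proposed in the paper:

```python
import numpy as np

def balanced_softmax_loss(logits, labels, class_counts):
    """Cross-entropy with prior-adjusted logits: each logit is shifted
    by the log frequency of its class, so clients with skewed label
    distributions still optimize a class-balanced objective."""
    priors = np.asarray(class_counts, dtype=float)
    priors = priors / priors.sum()
    adj = logits + np.log(priors + 1e-12)       # prior-adjusted logits
    adj = adj - adj.max(axis=1, keepdims=True)  # numerical stability
    logp = adj - np.log(np.exp(adj).sum(axis=1, keepdims=True))
    return float(-logp[np.arange(len(labels)), labels].mean())
```

With uniform class counts the adjustment is a constant shift and the loss reduces to plain softmax cross-entropy; with skewed counts it down-weights over-represented classes.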
Federated Multi-Task Learning under a Mixture of Distributions
The increasing size of data generated by smartphones and IoT devices motivated the development of Federated Learning (FL), a framework for on-device collaborative training of machine learning models.