Federated Learning
1550 papers with code • 12 benchmarks • 11 datasets
Federated Learning is a machine learning approach that allows multiple devices or entities to collaboratively train a shared model without exchanging their data with each other. Instead of sending data to a central server for training, the model is trained locally on each device, and only the model updates are sent to the central server, where they are aggregated to improve the shared model.
This approach allows for privacy-preserving machine learning, as each device keeps its data locally and only shares the information needed to improve the model.
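As a concrete illustration of that protocol, here is a minimal sketch of one communication round in Python/NumPy. The linear model, the local_update helper, and the toy client data are stand-ins invented for illustration; the dataset-size-weighted averaging is the FedAvg-style aggregation popularized by the first paper listed below.

```python
import numpy as np

def local_update(global_weights, client_data, lr=0.1, epochs=1):
    """Hypothetical local training step: the client refines the global
    model on its own data and returns only the updated weights."""
    w = global_weights.copy()
    X, y = client_data
    for _ in range(epochs):
        preds = X @ w                      # simple linear model as a stand-in
        grad = X.T @ (preds - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """One communication round: clients train locally, the server aggregates
    their updates weighted by local dataset size (FedAvg-style)."""
    updates, sizes = [], []
    for data in clients:
        updates.append(local_update(global_weights, data))
        sizes.append(len(data[1]))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

# Toy example: three clients whose private (X, y) data never leaves the client.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(5)
for _ in range(10):
    w = federated_round(w, clients)
```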
Libraries
Use these libraries to find Federated Learning models and implementations.
Most implemented papers
Communication-Efficient Learning of Deep Networks from Decentralized Data
Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device.
Federated Optimization in Heterogeneous Networks
Theoretically, we provide convergence guarantees for our framework when learning over data from non-identical distributions (statistical heterogeneity), and while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work (systems heterogeneity).
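This is the FedProx framework. Below is a minimal sketch of its core modification, assuming the standard formulation in which each client's local objective adds a proximal term (mu/2)·||w − w_global||², so a variable number of local steps stays anchored to the current global model; the linear model and hyperparameters are placeholders.

```python
import numpy as np

def fedprox_local_update(global_weights, client_data, mu=0.01, lr=0.1, steps=5):
    """Sketch of a FedProx-style local update: the proximal term
    (mu/2) * ||w - w_global||^2 penalizes drifting far from the global model,
    so each device can safely perform a different amount of local work."""
    w = global_weights.copy()
    X, y = client_data
    for _ in range(steps):                  # `steps` may differ per device
        preds = X @ w
        grad = X.T @ (preds - y) / len(y)   # gradient of the local loss
        grad += mu * (w - global_weights)   # gradient of the proximal term
        w -= lr * grad
    return w
```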
Adaptive Personalized Federated Learning
Investigation of the degree of personalization in federated learning algorithms has shown that only maximizing the performance of the global model will confine the capacity of the local models to personalize.
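A minimal sketch of the personalization idea behind this line of work, assuming the commonly described formulation in which each client serves a convex mixture of its purely local model and the shared global model and adapts the mixing weight; the update_alpha rule below is a simplified illustration, not the paper's exact procedure.

```python
import numpy as np

def personalized_model(local_weights, global_weights, alpha):
    """Each client serves a convex mixture of its local model and the global
    model, trading personalization (alpha -> 1) against collaboration (alpha -> 0)."""
    return alpha * local_weights + (1.0 - alpha) * global_weights

def update_alpha(local_weights, global_weights, grad_at_mixture, alpha, lr=0.05):
    """Hypothetical adaptive step for the mixing weight: by the chain rule, the
    derivative of the client loss with respect to alpha is the inner product of
    the loss gradient at the mixture with (local - global); clip alpha to [0, 1]."""
    d_alpha = grad_at_mixture @ (local_weights - global_weights)
    return float(np.clip(alpha - lr * d_alpha, 0.0, 1.0))
```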
Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification
In this work, we look at the effect such non-identical data distributions have on visual classification via Federated Learning.
Advances and Open Problems in Federated Learning
FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches.
LEAF: A Benchmark for Federated Settings
Modern federated networks, such as those comprised of wearable devices, mobile phones, or autonomous vehicles, generate massive amounts of data each day.
Agnostic Federated Learning
A key learning scenario in large-scale applications is that of federated learning, where a centralized model is trained based on data originating from a large number of clients.
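In the agnostic formulation, the model is trained for the worst-case mixture of client distributions rather than their average; over the full probability simplex this amounts to minimizing the maximum per-client loss. A simplified sketch under that assumption follows (the paper's actual algorithm is a more refined stochastic procedure, and the mean-squared-error objective here is only an example).

```python
import numpy as np

def mse_loss_and_grad(w, data):
    """Example per-client objective: mean-squared error of a linear model."""
    X, y = data
    resid = X @ w - y
    return float(resid @ resid) / len(y), 2.0 * X.T @ resid / len(y)

def worst_case_step(weights, clients, loss_and_grad=mse_loss_and_grad, lr=0.1):
    """Agnostic (minimax) update sketch: evaluate every client's loss, then take
    a gradient step for the currently worst-off client, so the model is trained
    for the least favorable client distribution rather than the average one."""
    losses, grads = zip(*(loss_and_grad(weights, d) for d in clients))
    worst = int(np.argmax(losses))
    return weights - lr * grads[worst]
```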
Towards Federated Learning at Scale: System Design
Federated Learning is a distributed machine learning approach which enables model training on a large corpus of decentralized data.
FedMD: Heterogenous Federated Learning via Model Distillation
With 10 distinct participants, each model's final test accuracy gains on average 20% over what is possible without collaboration, and is only a few percent lower than the performance it would have obtained if all private datasets had been pooled and made directly available to all participants.
SCAFFOLD: Stochastic Controlled Averaging for Federated Learning
We obtain tight convergence rates for FedAvg and prove that it suffers from "client drift" when the data is heterogeneous (non-iid), resulting in unstable and slow convergence.
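A minimal sketch of the control-variate correction SCAFFOLD uses to counteract this drift, assuming the commonly cited update in which each local gradient step is shifted by the difference between the server's and the client's control variates; the model, data, and the variate refresh rule are simplified placeholders.

```python
import numpy as np

def scaffold_local_update(global_weights, c_global, c_client, client_data,
                          lr=0.1, steps=5):
    """Sketch of a SCAFFOLD-style local update: each gradient step is corrected
    by (c_global - c_client), the estimated difference between the server's and
    this client's update directions, which counteracts client drift on non-iid data."""
    w = global_weights.copy()
    X, y = client_data
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)        # local gradient (linear model stand-in)
        w -= lr * (grad - c_client + c_global)   # drift-corrected step
    # One common rule for refreshing the client control variate (assumption:
    # the "option II" variant): reuse the net progress made this round.
    new_c_client = c_client - c_global + (global_weights - w) / (lr * steps)
    return w, new_c_client
```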