Federated Learning
1056 papers with code • 12 benchmarks • 10 datasets
Federated Learning is a machine learning approach that allows multiple devices or entities to collaboratively train a shared model without exchanging their data with each other. Instead of sending data to a central server for training, the model is trained locally on each device, and only the model updates are sent to the central server, where they are aggregated to improve the shared model.
This approach allows for privacy-preserving machine learning, as each device keeps its data locally and only shares the information needed to improve the model.
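The round-based protocol behind this description can be summarized in a few lines. Below is a minimal FedAvg-style sketch in NumPy; the `local_train` method and `num_examples` attribute are hypothetical stand-ins for a real on-device training routine and local dataset size, not part of any specific library.

```python
import numpy as np

def federated_averaging(global_params, clients, rounds=10):
    """Sketch of one federated training loop: each round, every client
    trains on its own data and returns updated parameters; the server
    only sees those parameters and aggregates them by weighted average."""
    for _ in range(rounds):
        updates, sizes = [], []
        for client in clients:
            # Hypothetical on-device training step: starts from the current
            # global parameters and uses only the client's private data.
            updates.append(client.local_train(np.copy(global_params)))
            sizes.append(client.num_examples)
        # Server-side aggregation: average updates weighted by local dataset size.
        weights = np.array(sizes, dtype=float) / np.sum(sizes)
        global_params = sum(w * u for w, u in zip(weights, updates))
    return global_params
```

The key point is that raw examples never leave the clients; only parameter updates are exchanged and aggregated.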
Most implemented papers
Communication-Efficient Learning of Deep Networks from Decentralized Data
Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device.
Federated Optimization in Heterogeneous Networks
Theoretically, we provide convergence guarantees for our framework when learning over data from non-identical distributions (statistical heterogeneity), and while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work (systems heterogeneity).
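For context, the mechanism usually associated with this framework (FedProx) is a proximal term added to each client's local objective, which keeps a variable amount of local work anchored to the current global model. A sketch in standard notation, where F_k is client k's local loss, w^t the global model at round t, and mu the proximal coefficient:

```latex
% FedProx local subproblem for client k at round t (sketch)
\min_{w} \; h_k(w; w^t) \;=\; F_k(w) \;+\; \frac{\mu}{2} \left\lVert w - w^t \right\rVert^2
```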
Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification
In this work, we look at the effect such non-identical data distributions have on visual classification via Federated Learning.
Advances and Open Problems in Federated Learning
FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches.
LEAF: A Benchmark for Federated Settings
Modern federated networks, such as those comprised of wearable devices, mobile phones, or autonomous vehicles, generate massive amounts of data each day.
SCAFFOLD: Stochastic Controlled Averaging for Federated Learning
We obtain tight convergence rates for FedAvg and prove that it suffers from 'client-drift' when the data is heterogeneous (non-iid), resulting in unstable and slow convergence.
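SCAFFOLD counteracts this drift with control variates that correct each local gradient step. A hedged NumPy sketch of the corrected local update is below; `grad_fn`, the server control variate `c`, and the client control variate `c_i` are assumed inputs for illustration, not anything prescribed by this page.

```python
import numpy as np

def scaffold_local_update(w, grad_fn, c, c_i, lr=0.1, local_steps=10):
    """Sketch of SCAFFOLD's drift-corrected local SGD: each step adds
    (c - c_i) to the local gradient so heterogeneous clients stay
    aligned with the server's overall update direction."""
    w_start = np.copy(w)
    for _ in range(local_steps):
        g = grad_fn(w)              # stochastic gradient on local data
        w = w - lr * (g - c_i + c)  # drift-corrected step
    # Refresh the client control variate from the net local progress
    # (the cheaper, option-II-style update).
    c_i_new = c_i - c + (w_start - w) / (local_steps * lr)
    return w, c_i_new
```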
Adaptive Personalized Federated Learning
Investigation of the degree of personalization in federated learning algorithms has shown that maximizing only the performance of the global model limits the capacity of the local models to personalize.
Agnostic Federated Learning
A key learning scenario in large-scale applications is that of federated learning, where a centralized model is trained based on data originating from a large number of clients.
Inverting Gradients -- How easy is it to break privacy in federated learning?
The idea of federated learning is to collaboratively train a neural network on a server.
Differentially Private Federated Learning: A Client Level Perspective
In such an attack, a client's contribution during training and information about their data set are revealed by analyzing the distributed model.
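A common client-level mitigation in the spirit of this work is to clip each client's update and add Gaussian noise before aggregation. The sketch below is illustrative only; `clip_norm` and `noise_std` are assumed hyperparameters, not values taken from the paper.

```python
import numpy as np

def dp_client_level_aggregate(client_updates, clip_norm=1.0, noise_std=0.01, seed=0):
    """Sketch of a client-level differentially private aggregation step:
    norm-clip each update so no single client dominates, average them,
    then add Gaussian noise to mask any individual contribution."""
    rng = np.random.default_rng(seed)
    clipped = []
    for delta in client_updates:
        norm = np.linalg.norm(delta)
        clipped.append(delta * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    return avg + rng.normal(0.0, noise_std, size=avg.shape)
```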