Federated Learning
1237 papers with code • 12 benchmarks • 11 datasets
Federated Learning is a machine learning approach that allows multiple devices or entities to collaboratively train a shared model without exchanging their data with each other. Instead of sending data to a central server for training, the model is trained locally on each device, and only the model updates are sent to the central server, where they are aggregated to improve the shared model.
This approach allows for privacy-preserving machine learning, as each device keeps its data locally and only shares the information needed to improve the model.
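The local-train-then-aggregate loop described above can be sketched in a few lines. This is a minimal, illustrative simulation of federated averaging on a toy linear-regression task; the function names, learning rate, and round counts are assumptions for the example, not part of any particular library's API.

```python
import numpy as np

def local_update(weights, data, targets, lr=0.1, epochs=5):
    """One client's local training: a few epochs of gradient descent
    on a simple linear-regression model (an illustrative stand-in
    for an arbitrary model)."""
    w = weights.copy()
    for _ in range(epochs):
        preds = data @ w
        grad = data.T @ (preds - targets) / len(targets)
        w -= lr * grad
    return w

def federated_averaging(global_w, clients, rounds=10):
    """Each round: every client trains locally on its own data, and
    the server averages the returned weights, weighted by each
    client's dataset size. Raw data never leaves a client."""
    for _ in range(rounds):
        updates, sizes = [], []
        for data, targets in clients:
            updates.append(local_update(global_w, data, targets))
            sizes.append(len(targets))
        total = sum(sizes)
        global_w = sum(n / total * u for u, n in zip(updates, sizes))
    return global_w

# Simulate four clients whose data comes from the same underlying model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = federated_averaging(np.zeros(2), clients, rounds=20)
```

After 20 rounds the global model recovers the shared underlying weights, even though the server only ever sees weight vectors, never data.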
Libraries
Use these libraries to find Federated Learning models and implementations
Datasets
Most implemented papers
Inverting Gradients -- How easy is it to break privacy in federated learning?
The idea of federated learning is to collaboratively train a neural network on a server.
Model-Contrastive Federated Learning
A key challenge in federated learning is to handle the heterogeneity of local data distribution across parties.
Differentially Private Federated Learning: A Client Level Perspective
In such an attack, a client's contribution during training and information about their dataset are revealed by analyzing the distributed model.
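A common defence against such leakage is client-level differential privacy: clip each client's update to a fixed norm, then add noise calibrated to that clipping bound before (or during) aggregation. The sketch below illustrates the idea only; `dp_aggregate` and `noise_mult` are hypothetical names, and a real deployment would derive the noise scale from a target (epsilon, delta) privacy budget.

```python
import numpy as np

def dp_aggregate(client_updates, clip_norm=1.0, noise_mult=0.5, rng=None):
    """Client-level DP aggregation sketch: clip each client's update
    to an L2 norm of clip_norm, average, then add Gaussian noise
    scaled to the clipping bound (noise_mult is an illustrative knob)."""
    rng = rng or np.random.default_rng()
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    sigma = noise_mult * clip_norm / len(client_updates)
    return avg + rng.normal(scale=sigma, size=avg.shape)
```

Clipping bounds any single client's influence on the aggregate, so the added noise can mask whether a given client participated at all.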
Federated Learning for Mobile Keyboard Prediction
We train a recurrent neural network language model using a distributed, on-device learning framework called federated learning for the purpose of next-word prediction in a virtual keyboard for smartphones.
Adaptive Federated Optimization
Federated learning is a distributed machine learning paradigm in which a large number of clients coordinate with a central server to learn a model without sharing their own training data.
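This paper proposes running an adaptive optimizer (e.g. Adam) on the server, treating the average client update as a pseudo-gradient. The sketch below is a loose illustration of that idea; the function name and hyperparameters are assumptions, not the paper's exact algorithm or any library's API.

```python
import numpy as np

def fedadam_server_step(global_w, client_ws, state, lr=0.1,
                        beta1=0.9, beta2=0.99, tau=1e-3):
    """Server-side adaptive step sketch: the average client update
    serves as a pseudo-gradient for an Adam-style server optimizer."""
    delta = np.mean(client_ws, axis=0) - global_w  # pseudo-gradient
    m, v = state
    m = beta1 * m + (1 - beta1) * delta            # first moment
    v = beta2 * v + (1 - beta2) * delta ** 2       # second moment
    new_w = global_w + lr * m / (np.sqrt(v) + tau)
    return new_w, (m, v)
```

Keeping the adaptive state (`m`, `v`) on the server means clients stay simple while the server compensates for noisy, heterogeneous updates.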
FedML: A Research Library and Benchmark for Federated Machine Learning
Federated learning (FL) is a rapidly growing research field in machine learning.
A generic framework for privacy preserving deep learning
We detail a new framework for privacy preserving deep learning and discuss its assets.
Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning
Deep neural networks are susceptible to various inference attacks as they remember information about their training data.
Learning Private Neural Language Modeling with Attentive Aggregation
Federated learning (FL) provides a promising approach to learning private language modeling for intelligent personalized keyboard suggestion by training models in distributed clients rather than training in a central server.
Federated Optimization for Heterogeneous Networks
Federated learning involves training and effectively combining machine learning models from distributed partitions of data (i.e., tasks) on edge devices, and can be naturally viewed as a multi-task learning problem.
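One way this paper handles heterogeneous data is a proximal term in each client's local objective that penalizes drift from the current global model. Below is a minimal sketch of that idea on a toy linear-regression client; the function name and hyperparameters are illustrative assumptions.

```python
import numpy as np

def fedprox_local_update(global_w, data, targets, mu=0.1, lr=0.1, epochs=5):
    """Proximal local step sketch: the ordinary gradient plus a term
    mu * (w - global_w) that keeps the local model from drifting too
    far from the server model when client data is heterogeneous."""
    w = global_w.copy()
    for _ in range(epochs):
        preds = data @ w
        grad = data.T @ (preds - targets) / len(targets)
        grad += mu * (w - global_w)  # proximal pull toward the global model
        w -= lr * grad
    return w
```

With `mu = 0` this reduces to plain local gradient descent; larger `mu` keeps local models closer to the global one, which stabilizes aggregation when clients' data distributions differ.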