268 papers with code • 0 benchmarks • 7 datasets
Federated Learning is a framework for training a centralized model on a task where the data is decentralized across different devices or silos.
This helps preserve the privacy of the data on each device: only weight updates are shared with the central model, so the raw data never leaves the device, yet it still contributes to training.
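To make this concrete, here is a minimal federated-averaging sketch in plain NumPy. The toy clients, the logistic-regression model, and the `local_update` and `federated_round` names are all illustrative assumptions for this page, not any specific paper's implementation.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1):
    """One step of local logistic-regression training on a client's private data."""
    preds = 1.0 / (1.0 + np.exp(-data @ weights))   # sigmoid predictions
    grad = data.T @ (preds - labels) / len(labels)  # gradient of the log-loss
    return weights - lr * grad                      # updated local weights

def federated_round(global_weights, clients):
    """Each client trains locally; only weight updates leave the device."""
    updates = []
    for data, labels in clients:
        local_w = local_update(global_weights.copy(), data, labels)
        updates.append(local_w - global_weights)    # share the update, not the data
    return global_weights + np.mean(updates, axis=0)  # federated averaging

# Toy setup: three clients, each holding private (features, labels)
# that never leave "the device".
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 5)), rng.integers(0, 2, size=20).astype(float))
           for _ in range(3)]
weights = np.zeros(5)
for _ in range(10):
    weights = federated_round(weights, clients)
print(weights)
```

Note that the server only ever sees the averaged weight updates; the per-client `(data, labels)` arrays stay local throughout.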
We detail a new framework for privacy-preserving deep learning and discuss its assets.
To improve real-world applications of machine learning, experienced modelers develop intuition about their datasets, their models, and how the two interact.
We train a recurrent neural network language model using a distributed, on-device learning framework called federated learning for the purpose of next-word prediction in a virtual keyboard for smartphones.
Federated learning (FL) is a rapidly growing research field in machine learning.
Training deep neural networks on large datasets can often be accelerated by using multiple compute nodes.
FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches.
However, in many social network scenarios, centralized federated learning is not applicable (e.g., a central agent or server connecting all users may not exist, or the communication cost to a central server may not be affordable).
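For contrast with the server-based setting above, the sketch below shows decentralized, gossip-style model averaging over a peer graph with no central server. The chain topology, the `neighbors` map, and the uniform-averaging rule are illustrative assumptions, not a specific paper's algorithm.

```python
import numpy as np

# Illustrative peer-to-peer topology: each user only talks to its neighbors.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a simple chain of four users

def gossip_round(models):
    """Each user averages its model with its neighbors' models; no server involved."""
    new_models = []
    for i, w in enumerate(models):
        group = [w] + [models[j] for j in neighbors[i]]
        new_models.append(np.mean(group, axis=0))
    return new_models

# Each user starts from a different local model (e.g., after local training).
models = [np.full(3, float(i)) for i in range(4)]
for _ in range(20):
    models = gossip_round(models)
print([m.round(3) for m in models])  # models converge toward a shared consensus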
We first show that the norm attack, a simple method that uses the norm of the gradients communicated between the parties, can largely reveal the ground-truth labels of the participants.
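As a hedged illustration of the intuition behind such an attack (a simplified scalar version, not the paper's exact construction), the sketch below shows how the magnitude of a per-example logistic-loss gradient can leak a binary label under class imbalance. The label rate, logit distribution, and threshold are all assumed for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
labels = (rng.random(n) < 0.1).astype(float)      # imbalanced labels: ~10% positives
logits = rng.normal(loc=-2.0, scale=1.0, size=n)  # raw scores of an under-trained model
probs = 1.0 / (1.0 + np.exp(-logits))             # sigmoid outputs

# Per-example gradient of the logistic loss w.r.t. the logit: g = p - y.
# This is the kind of quantity another party can observe during joint training.
grads = probs - labels

# Norm attack: guess each label from the gradient magnitude alone.
# With few positives and small p, positives yield |g| = 1 - p (large) while
# negatives yield |g| = p (small), so a simple threshold separates them.
guessed = (np.abs(grads) > 0.5).astype(float)
accuracy = (guessed == labels).mean()
print(f"label-recovery accuracy from gradient norms: {accuracy:.2f}")
```

In this toy setting the attack recovers most labels from gradient norms alone, which is why the communicated gradients themselves are treated as privacy-sensitive.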
Federated learning (FL) provides a promising approach to private language modeling for intelligent personalized keyboard suggestions by training models on distributed clients rather than on a central server.