671 papers with code • 12 benchmarks • 8 datasets
Federated Learning is a framework for training a centralized model on data that is decentralized across many devices or silos.
This helps preserve data privacy: only weight updates are shared with the central server, so the raw data remains on each device while still contributing to model training.
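The round structure described above can be sketched as a minimal federated-averaging loop. This is an illustrative example, not any particular paper's implementation: the linear model, synthetic per-device data, and all names are assumptions.

```python
# Hypothetical sketch of a federated-averaging round: devices train
# locally on private data and share only their updated weights.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """Run a few gradient-descent steps on one device's private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

# Synthetic decentralized data: each device keeps its (X, y) locally.
devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]

w_global = np.zeros(3)
for _ in range(10):  # communication rounds
    # Each device trains locally; only the resulting weights leave the device.
    local_weights = [local_update(w_global, X, y) for X, y in devices]
    # The server averages the shared weights into the centralized model.
    w_global = np.mean(local_weights, axis=0)
```

Only `w_global` and the per-device weights ever cross the network; the raw `(X, y)` pairs never leave their devices.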
Theoretically, we provide convergence guarantees for our framework when learning over data from non-identical distributions (statistical heterogeneity), while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work (systems heterogeneity).
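One common way to permit a variable amount of local work (as in proximal-term methods such as FedProx) is to let each device run a different number of local steps while a proximal penalty keeps its model near the global one. The sketch below is a hedged illustration under assumed least-squares objectives; the step counts, `mu`, and all names are illustrative.

```python
# Illustrative sketch: variable local work per device, stabilized by a
# proximal term mu/2 * ||w - w_global||^2 that discourages local drift.
import numpy as np

rng = np.random.default_rng(1)

def proximal_local_update(w_global, X, y, steps, mu=0.1, lr=0.1):
    """Perform a device-specific number of local steps (systems heterogeneity)."""
    w = w_global.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y) + mu * (w - w_global)
        w -= lr * grad
    return w

devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
# Slower or resource-constrained devices perform fewer local steps.
local_steps = [1, 3, 5, 10]

w_global = np.zeros(3)
for _ in range(10):  # communication rounds
    updates = [proximal_local_update(w_global, X, y, s)
               for (X, y), s in zip(devices, local_steps)]
    w_global = np.mean(updates, axis=0)
```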
In this work, we look at the effect that such non-identical data distributions have on visual classification via Federated Learning.
In such an attack, a client's contributions during training, and information about their dataset, can be revealed by analyzing the distributed model.