no code implementations • 30 May 2022 • Haley Hoech, Roman Rischke, Karsten Müller, Wojciech Samek
Federated learning suffers when local datasets are non-iid, i.e., when the distributions of the clients' data are heterogeneous.
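Such label-distribution heterogeneity can be simulated by giving each client samples from only a few classes. The sketch below is a minimal illustration of this idea; the function name and parameters are hypothetical, not from the paper.

```python
import random

def non_iid_partition(labels, num_clients, classes_per_client, seed=0):
    """Label-skew partition (hypothetical helper): each client sees samples
    from only `classes_per_client` classes, so local label distributions
    are heterogeneous (non-iid)."""
    rng = random.Random(seed)
    classes = sorted(set(labels))
    # Group sample indices by class label.
    by_class = {c: [i for i, y in enumerate(labels) if y == c] for c in classes}
    clients = []
    for _ in range(num_clients):
        own = rng.sample(classes, classes_per_client)  # classes visible to this client
        clients.append(sorted(i for c in own for i in by_class[c]))
    return clients
```

For example, with 8 samples over 4 classes and 2 classes per client, each client's local dataset covers at most half of the label space.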
1 code implementation • 4 Feb 2021 • Felix Sattler, Tim Korjakow, Roman Rischke, Wojciech Samek
Federated Distillation (FD) is a novel algorithmic paradigm for Federated Learning. It achieves training performance competitive with prior parameter-averaging methods while additionally allowing clients to train different model architectures, by distilling the clients' predictions on an unlabeled auxiliary dataset into a student model.
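The core mechanism — averaging the clients' soft predictions on a shared unlabeled set and fitting a student to the consensus — can be sketched as below. This is a simplified illustration, not the paper's algorithm: the interpolation step stands in for a proper gradient-based distillation loss, and all names are hypothetical.

```python
def aggregate_soft_labels(client_preds):
    """Average per-sample class-probability vectors from all clients into
    consensus soft labels on the shared unlabeled auxiliary set."""
    n_clients = len(client_preds)
    n_samples = len(client_preds[0])
    consensus = []
    for i in range(n_samples):
        n_classes = len(client_preds[0][i])
        consensus.append(
            [sum(p[i][c] for p in client_preds) / n_clients for c in range(n_classes)]
        )
    return consensus

def distill_step(student_probs, consensus, lr=0.5):
    """One simplified distillation step: move the student's predictive
    distribution toward the consensus targets (a stand-in for minimizing
    a KL-divergence distillation loss)."""
    return [
        [(1 - lr) * s + lr * t for s, t in zip(srow, trow)]
        for srow, trow in zip(student_probs, consensus)
    ]
```

Because only predictions on the auxiliary set are exchanged, each client's model architecture can differ — the server never needs to average parameters.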
no code implementations • 1 Dec 2020 • Felix Sattler, Arturo Marban, Roman Rischke, Wojciech Samek
Communication constraints are one of the major challenges preventing the widespread adoption of Federated Learning systems.