no code implementations • 14 May 2024 • Yue Xia, Christoph Hofmeister, Maximilian Egger, Rawad Bitar
We propose ByITFL, a novel scheme for federated learning (FL) that provides resilience against Byzantine users while keeping the users' data private from both the federator and the other users.
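The snippet does not spell out ByITFL's construction, so the following is only a generic illustration of Byzantine-resilient aggregation: a coordinate-wise trimmed mean that survives a bounded number of corrupt updates. The trimmed-mean rule and all names here are illustrative stand-ins, not the paper's scheme (which additionally keeps the updates private).

```python
import numpy as np

def trimmed_mean(updates, f):
    """Coordinate-wise trimmed mean: in every coordinate, drop the f largest
    and f smallest values across users, then average what remains."""
    stacked = np.sort(np.stack(updates), axis=0)   # sort users per coordinate
    return stacked[f:len(updates) - f].mean(axis=0)

# Example: 6 honest updates near the all-ones vector, 2 Byzantine outliers.
rng = np.random.default_rng(0)
honest = [np.ones(4) + 0.1 * rng.standard_normal(4) for _ in range(6)]
byzantine = [100 * np.ones(4), -100 * np.ones(4)]
print(trimmed_mean(honest + byzantine, f=2))       # stays close to all-ones
```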
no code implementations • 18 Jan 2024 • Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh, Deniz Gündüz, Nir Weinberger
Based on this capacity estimator, we propose a gap-elimination algorithm termed BestChanID, which is oblivious to the capacity-achieving input distribution and is guaranteed to output the discrete memoryless channel (DMC) with the largest capacity with a desired confidence.
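As a hedged sketch of the gap-elimination idea, the loop below estimates each channel's capacity from an empirical transition matrix (via Blahut-Arimoto) and eliminates channels whose upper confidence bound drops below the best lower bound. The sampler, the confidence radius `radius(t)`, and the round sizes are placeholder assumptions; the paper's estimator and confidence bounds are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def plugin_capacity(P, iters=200):
    """Capacity of a DMC with transition matrix P (rows = inputs),
    computed with the Blahut-Arimoto algorithm (in nats)."""
    q = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        r = q @ P                                        # induced output law
        D = np.sum(P * np.log((P + 1e-12) / (r + 1e-12)), axis=1)
        q *= np.exp(D)
        q /= q.sum()
    return float(q @ D)

def empirical_channel(P, n):
    """Estimate P by sending each input symbol n times (hypothetical sampler)."""
    counts = np.stack([rng.multinomial(n, row) for row in P])
    return counts / n

def best_chan_id(channels, radius, max_rounds=20, n_per_round=500):
    """Generic gap elimination: keep a channel only while its upper confidence
    bound stays above the best lower bound. The radius schedule is a
    placeholder, not the paper's bound."""
    alive = set(range(len(channels)))
    for t in range(1, max_rounds + 1):
        est = {i: plugin_capacity(empirical_channel(channels[i], t * n_per_round))
               for i in alive}
        best_lcb = max(est[i] - radius(t) for i in alive)
        alive = {i for i in alive if est[i] + radius(t) >= best_lcb}
        if len(alive) == 1:
            break
    return alive

# Two binary symmetric channels; BSC(0.05) has the larger capacity.
bsc = lambda p: np.array([[1 - p, p], [p, 1 - p]])
print(best_chan_id([bsc(0.05), bsc(0.3)], radius=lambda t: 0.5 / np.sqrt(t)))
```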
no code implementations • 25 Jun 2023 • Maximilian Egger, Christoph Hofmeister, Antonia Wachter-Zeh, Rawad Bitar
In federated learning, several participating clients collaboratively train a neural network on their privately owned data.
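For readers unfamiliar with the setting, here is a minimal sketch of one federated averaging (FedAvg) round on a linear model; FedAvg is the canonical baseline, not necessarily the scheme studied in the paper, and the helper names are invented for the example.

```python
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on a linear model
    (stands in for the neural network; the data never leaves the client)."""
    for _ in range(epochs):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

def fedavg_round(w_global, clients):
    """One FedAvg round: each client trains locally, the server averages
    the returned models weighted by local dataset size."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    local_models = [local_step(w_global.copy(), X, y) for X, y in clients]
    return np.average(local_models, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):                       # four clients with private data
    X = rng.standard_normal((50, 2))
    clients.append((X, X @ true_w + 0.01 * rng.standard_normal(50)))
w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print(w)                                 # approaches [2, -1]
```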
no code implementations • 17 Apr 2023 • Maximilian Egger, Serge Kas Hanna, Rawad Bitar
Under this model, we construct a novel scheme that adapts both the number of workers and the computation load throughout the runtime of the algorithm.
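The adaptation criterion itself is in the paper; the sketch below only illustrates the shape of such a scheme, with a placeholder trigger that doubles the number of active workers and the per-worker computation load once the averaged gradient becomes small.

```python
import numpy as np

rng = np.random.default_rng(1)

def adaptive_dsgd(grad_fn, w, max_workers=16, rounds=100, lr=0.1):
    """Sketch of adapting both knobs during the run: start cheap (few
    workers, small per-worker batch) and scale up once the averaged
    gradient gets small, i.e. once extra averaging starts to pay off.
    The doubling trigger below is a placeholder, not the paper's rule."""
    active, load = 2, 8                  # workers in use, samples per worker
    for t in range(1, rounds + 1):
        grads = [grad_fn(w, load) for _ in range(active)]  # parallel in practice
        g = np.mean(grads, axis=0)
        w = w - lr * g
        if np.linalg.norm(g) < 2.0 / np.sqrt(t):           # placeholder trigger
            active = min(2 * active, max_workers)
            load = 2 * load
    return w

# Noisy quadratic: gradient 2w plus noise that shrinks with the batch size b.
grad_fn = lambda w, b: 2 * w + rng.standard_normal(w.shape) / np.sqrt(b)
print(adaptive_dsgd(grad_fn, np.array([5.0, -3.0])))       # approaches 0
```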
no code implementations • 16 Dec 2022 • Luis Maßny, Christoph Hofmeister, Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh
Since the number of stragglers is random in practice and unknown a priori, tolerating a fixed number of stragglers can incur a sub-optimal computation load and higher latency.
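A small simulation makes the trade-off concrete: designing for exactly `s_design` stragglers multiplies every worker's load by `s_design + 1`, but when more than `s_design` workers actually straggle, the round is stuck waiting on a slow worker. The Bernoulli straggler model and all parameters are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_latency(n, s_design, p_straggle=0.2, delay=10.0, trials=100_000):
    """Round latency when the code tolerates exactly s_design stragglers:
    each worker's base time scales with the load factor s_design + 1, and
    the main node waits for the fastest n - s_design workers. Stragglers
    are delayed by a multiplicative factor."""
    stragglers = rng.random((trials, n)) < p_straggle
    times = (s_design + 1) * (1.0 + delay * stragglers)
    kth = np.sort(times, axis=1)[:, n - s_design - 1]  # (n - s_design)-th fastest
    return kth.mean()

n = 10
for s in range(n):   # neither extreme wins: load vs. waiting trade-off
    print(s, round(mean_latency(n, s), 2))
```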
no code implementations • 16 Feb 2022 • Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh, Deniz Gündüz
We consider the distributed stochastic gradient descent (SGD) problem, where a main node distributes gradient calculations among $n$ workers.
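As a baseline picture of this setup, the sketch below has the main node split each mini-batch across the workers and apply the averaged gradient; the least-squares model and all parameter choices are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

def worker_gradient(w, X, y):
    """One worker's gradient on its shard (least-squares loss)."""
    return 2 * X.T @ (X @ w - y) / len(y)

def distributed_sgd(X, y, n_workers=4, rounds=200, lr=0.1, batch=64):
    """Main node splits each mini-batch across the workers, collects the
    partial gradients, and applies the averaged update (serial here,
    parallel in a real deployment)."""
    w = np.zeros(X.shape[1])
    for _ in range(rounds):
        idx = rng.choice(len(y), size=batch, replace=False)
        shards = np.array_split(idx, n_workers)
        grads = [worker_gradient(w, X[s], y[s]) for s in shards]
        w -= lr * np.mean(grads, axis=0)
    return w

X = rng.standard_normal((1000, 3))
y = X @ np.array([1.0, -2.0, 0.5])
print(distributed_sgd(X, y))             # approaches [1, -2, 0.5]
```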