no code implementations • 16 Jul 2024 • Maximilian Egger, Christoph Hofmeister, Cem Kaya, Rawad Bitar, Antonia Wachter-Zeh
However, federated edge learning (FEEL) still suffers from a communication bottleneck due to the transmission of high-dimensional model updates from the clients to the federator.
no code implementations • 16 Jul 2024 • Maximilian Egger, Ghadir Ayache, Rawad Bitar, Antonia Wachter-Zeh, Salim El Rouayheb
We propose a decentralized algorithm called DECAFORK that can maintain the number of RWs in the graph around a desired value even in the presence of arbitrary RW failures.
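As a rough illustration of the idea (and not the DECAFORK protocol itself), the hypothetical sketch below lets a node that currently holds a random walk fork an extra walk with a probability that grows when its local estimate of the number of surviving walks falls below the target; the function names and the fork-probability rule are assumptions.

```python
import random

def step_walk(node, graph, target_rw_count, estimated_rw_count, fork_gain=0.1):
    """One step of a single random walk at `node` (hypothetical sketch).

    graph: dict mapping node -> list of neighbours
    target_rw_count: desired number of walks in the graph
    estimated_rw_count: this node's local estimate of how many walks are alive
    Returns the nodes holding a walk after this step (two entries if the walk forked).
    """
    # Fork with a probability that grows as the estimate falls below the target.
    deficit = max(0, target_rw_count - estimated_rw_count)
    fork_prob = min(1.0, fork_gain * deficit / target_rw_count)

    next_nodes = [random.choice(graph[node])]
    if random.random() < fork_prob:
        next_nodes.append(random.choice(graph[node]))  # spawn a duplicate walk
    return next_nodes
```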
no code implementations • 20 Jun 2024 • Afonso de Sá Delgado Neto, Maximilian Egger, Mayank Bakshi, Rawad Bitar
We introduce CYBER-0, the first zero-order optimization algorithm for memory- and communication-efficient Federated Learning that is resilient to Byzantine faults.
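To make the zero-order idea concrete, here is a minimal, hypothetical sketch of one communication round in which every client sends only a scalar finite-difference estimate along a perturbation direction derived from a shared seed, and the server aggregates the scalars with a median to blunt Byzantine reports; this is generic zero-order FL, not the CYBER-0 algorithm, and all names and parameters are assumptions.

```python
import numpy as np

def zero_order_round(model, clients_loss, seed, mu=1e-3, robust_agg=np.median):
    """One round of a zero-order FL scheme (illustrative sketch only).

    model: current parameter vector (np.ndarray)
    clients_loss: list of callables, clients_loss[i](w) -> local loss of client i
    seed: shared random seed so all parties generate the same perturbation
    Returns the aggregated directional-derivative estimate and the direction.
    """
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(model.shape)          # common perturbation direction
    u /= np.linalg.norm(u)

    # Each client sends a single scalar: a finite-difference estimate along u.
    scalars = [(loss(model + mu * u) - loss(model - mu * u)) / (2 * mu)
               for loss in clients_loss]

    # A robust aggregation rule (here the median) limits the influence of
    # Byzantine clients that report arbitrary scalars.
    g = robust_agg(scalars)
    return g, u

# Usage per round: w <- w - lr * g * u
```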
no code implementations • 29 May 2024 • Yue Xia, Christoph Hofmeister, Maximilian Egger, Rawad Bitar
Federated Learning (FL) faces several challenges, such as the privacy of the clients' data and security against Byzantine clients.
no code implementations • 14 May 2024 • Yue Xia, Christoph Hofmeister, Maximilian Egger, Rawad Bitar
We propose ByITFL, a novel scheme for FL that provides resilience against Byzantine users while keeping the users' data private from the federator and from the other users.
no code implementations • 18 Jan 2024 • Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh, Deniz Gündüz, Nir Weinberger
Based on this capacity estimator, a gap-elimination algorithm termed BestChanID is proposed, which is oblivious to the capacity-achieving input distribution and is guaranteed to output the DMC with the largest capacity, with a desired confidence.
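A generic successive-elimination loop of this flavor is sketched below, assuming a black-box capacity estimator and a sub-Gaussian-style confidence radius; the actual BestChanID procedure and its confidence bounds are derived in the paper, so every name and constant here is an assumption.

```python
import math

def gap_elimination(estimate_capacity, n_channels, delta, max_rounds=1000):
    """Generic gap-elimination sketch for finding the channel with the largest
    estimated capacity (not the exact BestChanID procedure).

    estimate_capacity(k, t) -> empirical capacity estimate of channel k after
                               t probing rounds (assumed available).
    delta: desired failure probability.
    """
    active = set(range(n_channels))
    est = {}
    for t in range(1, max_rounds + 1):
        # Assumed confidence radius shrinking with the number of samples.
        radius = math.sqrt(math.log(2 * n_channels * t * t / delta) / t)
        est = {k: estimate_capacity(k, t) for k in active}
        best = max(est.values())
        # Drop every channel whose upper confidence bound falls below the
        # lower confidence bound of the current leader.
        active = {k for k in active if est[k] + radius >= best - radius}
        if len(active) == 1:
            break
    return max(active, key=lambda k: est[k])
```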
no code implementations • 25 Jun 2023 • Maximilian Egger, Christoph Hofmeister, Antonia Wachter-Zeh, Rawad Bitar
Existing literature focuses on private aggregation schemes that tackle the privacy problem in federated learning in settings where all users are connected to each other and to the federator.
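For readers unfamiliar with private aggregation, the toy sketch below uses the standard pairwise-masking construction over a fully connected set of users, where pairwise shared masks cancel in the sum so the federator only learns the aggregate; this is textbook secure aggregation, not the scheme proposed in the paper, and all names are assumptions.

```python
import numpy as np

def masked_aggregate(updates, seeds, field_mod=2**32):
    """Toy pairwise-masking secure aggregation (fully connected users assumed).

    updates: list of integer np.ndarrays, one model update per user
    seeds: symmetric matrix with seeds[i][j] == seeds[j][i], a shared seed for
           every pair of users (full connectivity assumed).
    Returns the sum of all updates; individual masked updates reveal nothing.
    """
    n = len(updates)
    masked = []
    for i in range(n):
        m = updates[i].astype(np.uint64) % field_mod
        for j in range(n):
            if j == i:
                continue
            rng = np.random.default_rng(seeds[i][j])
            mask = rng.integers(0, field_mod, size=updates[i].shape, dtype=np.uint64)
            # Masks cancel pairwise: user i adds, user j subtracts (or vice versa).
            m = (m + mask) % field_mod if i < j else (m - mask) % field_mod
        masked.append(m)
    # The federator only ever sees masked updates; their sum equals the true sum.
    return sum(masked) % field_mod
```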
no code implementations • 17 Apr 2023 • Maximilian Egger, Serge Kas Hanna, Rawad Bitar
Considering this model, we construct a novel scheme that adapts both the number of workers and the computation load throughout the run-time of the algorithm.
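Purely as an illustration of run-time adaptivity (the paper's actual policy follows from its analysis, so the schedule below is an assumption), one can picture increasing both quantities as training progresses:

```python
def adaptive_schedule(iteration, total_iters, n_workers_max, load_max):
    """Toy schedule (hypothetical, not the paper's policy): early iterations
    tolerate noisier gradients, so fewer workers and a smaller per-worker
    computation load suffice; both grow as the algorithm converges and more
    accurate gradients pay off.
    """
    frac = (iteration + 1) / total_iters
    n_workers = max(1, round(frac * n_workers_max))
    load = max(1, round(frac * load_max))  # e.g. mini-batch size per worker
    return n_workers, load
```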
no code implementations • 16 Dec 2022 • Luis Maßny, Christoph Hofmeister, Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh
Since the number of stragglers in practice is random and unknown a priori, tolerating a fixed number of stragglers can yield a sub-optimal computation load and can result in higher latency.
no code implementations • 16 Feb 2022 • Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh, Deniz Gündüz
We consider the distributed SGD problem, where a main node distributes gradient calculations among $n$ workers.
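The setting can be pictured with the serial simulation below, in which a main node hands each of the $n$ workers a data shard, collects (a subset of) their gradients, and averages them; this is a generic sketch of the master-worker setting, not the scheme analyzed in the paper, and the function names are assumptions.

```python
import numpy as np

def distributed_sgd_round(w, data_shards, grad_fn, lr=0.1, wait_for=None):
    """Serial simulation of one round of main-node / worker distributed SGD.

    w: current model parameters (np.ndarray)
    data_shards: list of n datasets, one per worker
    grad_fn(w, shard) -> gradient computed by a worker on its shard
    wait_for: number of fastest workers to wait for (None = all n workers)
    """
    n = len(data_shards)
    k = n if wait_for is None else wait_for
    # Each worker computes a gradient on its own shard; in a real deployment
    # the main node would only collect the first k responses to arrive.
    grads = [grad_fn(w, shard) for shard in data_shards][:k]
    return w - lr * np.mean(grads, axis=0)
```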