no code implementations • 16 Jul 2024 • Maximilian Egger, Christoph Hofmeister, Cem Kaya, Rawad Bitar, Antonia Wachter-Zeh
However, FEEL still suffers from a communication bottleneck due to the transmission of high-dimensional model updates from the clients to the federator.
no code implementations • 16 Jul 2024 • Maximilian Egger, Ghadir Ayache, Rawad Bitar, Antonia Wachter-Zeh, Salim El Rouayheb
We propose a decentralized algorithm called DECAFORK that can maintain the number of RWs in the graph around a desired value even in the presence of arbitrary RW failures.
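The idea of keeping the number of random walks (RWs) near a target despite failures can be illustrated with a toy simulation. This is a hedged sketch only: the forking rule below is a simple mean-reverting heuristic, not DECAFORK's actual decentralized estimation-and-forking rule.

```python
import random

def simulate_rw_count(target, steps, fail_prob, seed=0):
    """Toy simulation: a population of random walks where each walk may
    fail at every step, and survivors fork just often enough (in
    expectation) to pull the population back toward the target size.
    Illustrative heuristic only, not the DECAFORK algorithm itself."""
    rng = random.Random(seed)
    count = target
    history = []
    for _ in range(steps):
        # Each walk independently fails with probability fail_prob.
        count = sum(1 for _ in range(count) if rng.random() > fail_prob)
        # Survivors fork with a probability chosen so the expected number
        # of new walks equals the current deficit below the target.
        deficit = max(target - count, 0)
        fork_prob = min(deficit / max(count, 1), 1.0)
        count += sum(1 for _ in range(count) if rng.random() < fork_prob)
        history.append(count)
    return history

hist = simulate_rw_count(target=20, steps=200, fail_prob=0.05)
```

Despite 5% of walks failing per step, the population hovers near the target because the expected number of forks matches the expected number of failures.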
no code implementations • 20 Jun 2024 • Afonso de Sá Delgado Neto, Maximilian Egger, Mayank Bakshi, Rawad Bitar
We introduce CYBER-0, the first zero-order optimization algorithm for memory- and communication-efficient Federated Learning that is resilient to Byzantine faults.
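Zero-order optimization avoids backpropagation entirely: the gradient is estimated from function evaluations along random directions, which is what enables the memory and communication savings. The sketch below is a generic two-point zero-order estimator, not CYBER-0's specific update rule.

```python
import numpy as np

def zero_order_grad(f, x, mu=1e-4, num_dirs=1000, rng=None):
    """Generic two-point zero-order gradient estimate: average the
    finite-difference directional derivative of f along random Gaussian
    directions. Only function evaluations are needed, never gradients.
    (Illustrative; CYBER-0's exact estimator is defined in the paper.)"""
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.shape)
        # Symmetric difference approximates the directional derivative.
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / num_dirs

# Usage: estimate the gradient of f(x) = ||x||^2, whose true gradient is 2x.
x = np.array([1.0, -2.0, 0.5])
g = zero_order_grad(lambda v: float(v @ v), x)
```

For the quadratic above the estimate concentrates around the true gradient `2x` as the number of probing directions grows.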
no code implementations • 29 May 2024 • Yue Xia, Christoph Hofmeister, Maximilian Egger, Rawad Bitar
Federated Learning (FL) faces several challenges, such as the privacy of the clients' data and security against Byzantine clients.
no code implementations • 14 May 2024 • Yue Xia, Christoph Hofmeister, Maximilian Egger, Rawad Bitar
We propose ByITFL, a novel scheme for FL that provides resilience against Byzantine users while keeping the users' data private from both the federator and the other users.
no code implementations • 18 Jan 2024 • Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh, Deniz Gündüz, Nir Weinberger
Based on this capacity estimator, a gap-elimination algorithm termed BestChanID is proposed, which is oblivious to the capacity-achieving input distribution and is guaranteed to output the DMC with the largest capacity, with a desired confidence.
no code implementations • 25 Jun 2023 • Maximilian Egger, Christoph Hofmeister, Antonia Wachter-Zeh, Rawad Bitar
Existing literature focuses on private aggregation schemes that tackle the privacy problem in federated learning in settings where all users are connected to each other and to the federator.
no code implementations • 17 Apr 2023 • Maximilian Egger, Serge Kas Hanna, Rawad Bitar
Considering this model, we construct a novel scheme that adapts both the number of workers and the computation load throughout the run-time of the algorithm.
no code implementations • 16 Dec 2022 • Luis Maßny, Christoph Hofmeister, Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh
Since the number of stragglers in practice is random and unknown a priori, tolerating a fixed number of stragglers can yield a sub-optimal computation load and can result in higher latency.
no code implementations • 4 Aug 2022 • Serge Kas Hanna, Rawad Bitar, Parimal Parag, Venkat Dasari, Salim El Rouayheb
Moreover, the results show that the adaptive version is communication-efficient: it requires less communication between the master and the workers than the non-adaptive versions.
no code implementations • 16 Feb 2022 • Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh, Deniz Gündüz
We consider the distributed SGD problem, where a main node distributes gradient calculations among $n$ workers.
no code implementations • 4 Feb 2021 • Lorenz Welter, Rawad Bitar, Antonia Wachter-Zeh, Eitan Yaakobi
We show an equivalence between correcting $t$-criss-cross deletions and $t$-criss-cross insertions and show that a code correcting $t$-criss-cross insertions/deletions has redundancy at least $tn + t \log n - \log(t!)$.
Information Theory
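The stated redundancy lower bound for correcting $t$ criss-cross insertions/deletions can be evaluated directly. A small helper, assuming base-2 logarithms for illustration:

```python
import math

def criss_cross_redundancy_lower_bound(t, n):
    """Evaluate the lower bound t*n + t*log(n) - log(t!) on the redundancy
    of a code correcting t criss-cross insertions/deletions in an n x n
    array (logarithms taken base 2 here for illustration)."""
    return t * n + t * math.log2(n) - math.log2(math.factorial(t))

# e.g. t = 2 criss-cross deletions in a 16 x 16 array:
r = criss_cross_redundancy_lower_bound(2, 16)  # 2*16 + 2*4 - 1 = 39.0
```

The dominant term is $tn$: each criss-cross deletion removes an entire row and column, so the redundancy must grow linearly in the array dimension.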
no code implementations • 14 Jan 2021 • Rawad Bitar, Marvin Xhemrishi, Antonia Wachter-Zeh
A master server owns two private matrices $\mathbf{A}$ and $\mathbf{B}$ and hires worker nodes to help compute their multiplication.
Information Theory
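One standard way to keep both matrices private from individual workers is Shamir-style secret sharing over a finite field: each worker receives masked shares of $\mathbf{A}$ and $\mathbf{B}$, multiplies them locally, and the master interpolates the product. The sketch below shows this generic construction, not the paper's specific scheme.

```python
import numpy as np

P = 1_000_003  # prime modulus for the finite field (chosen for illustration)

def shares(M, xs, rng):
    """Degree-1 Shamir-style shares of matrix M: M + R*x mod P with R
    uniformly random. A single share reveals nothing about M."""
    R = rng.integers(0, P, M.shape, dtype=np.int64)
    return [(M + R * x) % P for x in xs]

def lagrange_at_zero(xs, ys):
    """Interpolate the polynomial through the matrix points (x_i, y_i)
    and evaluate it at 0, all modulo P."""
    acc = np.zeros_like(ys[0])
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        lam = num * pow(den, -1, P) % P  # Lagrange coefficient for point i
        acc = (acc + yi * lam) % P
    return acc

# Sketch of one secure multiplication: the product of two degree-1 share
# polynomials has degree 2, so three workers' local products suffice to
# interpolate A @ B at x = 0.
rng = np.random.default_rng(0)
A = rng.integers(0, 100, (2, 3)).astype(np.int64)
B = rng.integers(0, 100, (3, 2)).astype(np.int64)
xs = [1, 2, 3]                                 # evaluation points, one per worker
sA, sB = shares(A, xs, rng), shares(B, xs, rng)
ys = [(a @ b) % P for a, b in zip(sA, sB)]     # each worker's local product
C = lagrange_at_zero(xs, ys)
```

Here $(\mathbf{A}+\mathbf{R}x)(\mathbf{B}+\mathbf{S}x)$ has constant term $\mathbf{A}\mathbf{B}$, so interpolating at $x=0$ recovers the product while each worker only ever sees uniformly masked matrices.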
no code implementations • 25 Feb 2020 • Serge Kas Hanna, Rawad Bitar, Parimal Parag, Venkat Dasari, Salim El Rouayheb
One solution studied in the literature is to wait at each iteration for the responses of the fastest $k<n$ workers before updating the model, where $k$ is a fixed parameter.
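The fixed-$k$ strategy the snippet describes is easy to sketch: at each iteration the master updates the model using only the gradients of the $k$ fastest responders. This is an illustrative sketch of that baseline (the learning rate and toy data are assumptions), not the adaptive scheme the paper develops.

```python
import numpy as np

def fastest_k_sgd_step(w, grads, delays, k):
    """One distributed-SGD step where the master waits only for the k
    fastest of n workers: average their gradients and update the model.
    Sketch of the fixed-k baseline, not the paper's adaptive scheme."""
    fastest = np.argsort(delays)[:k]           # indices of the k quickest workers
    g = np.mean([grads[i] for i in fastest], axis=0)
    return w - 0.1 * g                         # fixed learning rate 0.1 (assumed)

# Usage: n = 5 workers, each with a toy gradient and a simulated delay.
w = np.zeros(3)
grads = [np.full(3, float(i + 1)) for i in range(5)]
delays = np.array([0.9, 0.1, 0.5, 0.2, 0.8])   # workers 1 and 3 respond first
w_new = fastest_k_sgd_step(w, grads, delays, k=2)
```

The straggler trade-off is visible in the `k` parameter: a small `k` cuts per-iteration latency but averages fewer gradients, which is what motivates choosing `k` adaptively.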