Search Results for author: Rawad Bitar

Found 14 papers, 0 papers with code

Scalable and Reliable Over-the-Air Federated Edge Learning

no code implementations16 Jul 2024 Maximilian Egger, Christoph Hofmeister, Cem Kaya, Rawad Bitar, Antonia Wachter-Zeh

However, FEEL still suffers from a communication bottleneck due to the transmission of high-dimensional model updates from the clients to the federator.

Self-Duplicating Random Walks for Resilient Decentralized Learning on Graphs

no code implementations16 Jul 2024 Maximilian Egger, Ghadir Ayache, Rawad Bitar, Antonia Wachter-Zeh, Salim El Rouayheb

We propose a decentralized algorithm called DECAFORK that can maintain the number of RWs in the graph around a desired value even in the presence of arbitrary RW failures.
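The snippet only states the goal — keep the random-walk population near a target despite failures. A toy simulation of the self-duplication idea can be sketched as follows; this is an illustrative simplification, not DECAFORK itself (the real algorithm relies on local count estimates rather than the true global count used here, and the ring graph, failure model, and forking rule are all assumptions):

```python
import random

def simulate_forking_walks(target=10, steps=200, fail_prob=0.05, seed=0):
    """Toy simulation of self-duplicating random walks on a ring graph.

    Each walk may fail at every step; surviving walks fork with a
    probability that grows when the population falls below the target,
    pulling the count back toward the desired value. (Simplified sketch
    of the DECAFORK idea: here the true global count is used, whereas
    the actual algorithm works from local estimates.)
    """
    rng = random.Random(seed)
    n_nodes = 50
    walks = [rng.randrange(n_nodes) for _ in range(target)]
    history = []
    for _ in range(steps):
        # arbitrary walk failures
        survivors = [w for w in walks if rng.random() > fail_prob]
        # each surviving walk moves to a random ring neighbour
        survivors = [(w + rng.choice([-1, 1])) % n_nodes for w in survivors]
        # fork with higher probability when the count is below target
        deficit = max(0, target - len(survivors))
        fork_prob = min(1.0, deficit / target + fail_prob)
        walks = survivors[:]
        for w in survivors:
            if rng.random() < fork_prob:
                walks.append(w)
        history.append(len(walks))
    return history
```

In this toy setup the fork probability's failure-rate term roughly offsets the expected losses, so the population fluctuates around the target instead of drifting to extinction or blow-up.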

Communication-Efficient Byzantine-Resilient Federated Zero-Order Optimization

no code implementations20 Jun 2024 Afonso de Sá Delgado Neto, Maximilian Egger, Mayank Bakshi, Rawad Bitar

We introduce CYBER-0, the first zero-order optimization algorithm for memory-and-communication efficient Federated Learning, resilient to Byzantine faults.

Federated Learning
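The snippet describes CYBER-0 as a zero-order method, meaning clients estimate gradients from function evaluations alone. A standard two-point zero-order estimator — the generic building block of such methods, not necessarily the estimator CYBER-0 uses — can be sketched as:

```python
import random

def zero_order_gradient(f, x, delta=1e-4, num_dirs=20):
    """Two-point zero-order gradient estimate of f at x.

    For a random unit direction u, the quantity
    d * (f(x + delta*u) - f(x - delta*u)) / (2*delta) * u
    is an (approximately) unbiased gradient estimate; we average it over
    num_dirs directions. Only function values are needed, which is what
    makes zero-order methods memory- and communication-light.
    """
    d = len(x)
    grad = [0.0] * d
    for _ in range(num_dirs):
        # sample a uniform direction on the unit sphere
        u = [random.gauss(0.0, 1.0) for _ in range(d)]
        norm = sum(c * c for c in u) ** 0.5
        u = [c / norm for c in u]
        fwd = f([xi + delta * ui for xi, ui in zip(x, u)])
        bwd = f([xi - delta * ui for xi, ui in zip(x, u)])
        scale = d * (fwd - bwd) / (2 * delta)
        for i in range(d):
            grad[i] += scale * u[i] / num_dirs
    return grad
```

In a federated setting this is attractive because a client can communicate a single scalar (the finite difference) per shared direction instead of a full gradient vector.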

LoByITFL: Low Communication Secure and Private Federated Learning

no code implementations29 May 2024 Yue Xia, Christoph Hofmeister, Maximilian Egger, Rawad Bitar

Federated Learning (FL) faces several challenges, such as the privacy of the clients' data and security against Byzantine clients.

Federated Learning

Byzantine-Resilient Secure Aggregation for Federated Learning Without Privacy Compromises

no code implementations14 May 2024 Yue Xia, Christoph Hofmeister, Maximilian Egger, Rawad Bitar

We propose ByITFL, a novel scheme for FL that provides resilience against Byzantine users while keeping the users' data private from the federator and from other users.

Federated Learning, Privacy Preserving

Maximal-Capacity Discrete Memoryless Channel Identification

no code implementations18 Jan 2024 Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh, Deniz Gündüz, Nir Weinberger

Based on this capacity estimator, a gap-elimination algorithm termed BestChanID is proposed, which is oblivious to the capacity-achieving input distribution and is guaranteed to output the DMC with the largest capacity, with a desired confidence.

Private Aggregation in Hierarchical Wireless Federated Learning with Partial and Full Collusion

no code implementations25 Jun 2023 Maximilian Egger, Christoph Hofmeister, Antonia Wachter-Zeh, Rawad Bitar

Existing literature focuses on private aggregation schemes that tackle the privacy problem in federated learning in settings where all users are connected to each other and to the federator.

Federated Learning

Fast and Straggler-Tolerant Distributed SGD with Reduced Computation Load

no code implementations17 Apr 2023 Maximilian Egger, Serge Kas Hanna, Rawad Bitar

Considering this model, we construct a novel scheme that adapts both the number of workers and the computation load throughout the run-time of the algorithm.

Nested Gradient Codes for Straggler Mitigation in Distributed Machine Learning

no code implementations16 Dec 2022 Luis Maßny, Christoph Hofmeister, Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh

Since the number of stragglers in practice is random and unknown a priori, tolerating a fixed number of stragglers can yield a sub-optimal computation load and can result in higher latency.

Scheduling

Adaptive Stochastic Gradient Descent for Fast and Communication-Efficient Distributed Learning

no code implementations4 Aug 2022 Serge Kas Hanna, Rawad Bitar, Parimal Parag, Venkat Dasari, Salim El Rouayheb

Moreover, the results show that the adaptive version is communication-efficient: the amount of communication required between the master and the workers is less than that of non-adaptive versions.

Cost-Efficient Distributed Learning via Combinatorial Multi-Armed Bandits

no code implementations16 Feb 2022 Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh, Deniz Gündüz

We consider the distributed SGD problem, where a main node distributes gradient calculations among $n$ workers.

Multi-Armed Bandits
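The snippet frames cost-efficient worker selection as a bandit problem. A simple UCB-style sketch of the idea — pick the k of n workers with the lowest optimistic latency estimate each round — is shown below; the simulation setup and the concrete index rule are assumptions for illustration, not the paper's algorithm:

```python
import math
import random

def select_workers_ucb(n=6, k=3, rounds=300, seed=1):
    """Toy combinatorial-bandit sketch: pick k of n workers per round.

    The main node keeps an empirical mean latency per worker plus a
    confidence bonus and assigns each round's gradient computations to
    the k workers with the lowest optimistic latency estimate. The true
    mean latencies below are a hypothetical simulation setup.
    """
    rng = random.Random(seed)
    true_means = [1.0 + 0.5 * i for i in range(n)]  # worker 0 is fastest
    counts, means = [0] * n, [0.0] * n
    for t in range(1, rounds + 1):
        scores = []
        for i in range(n):
            if counts[i] == 0:
                scores.append(float("-inf"))  # force initial exploration
            else:
                bonus = math.sqrt(2 * math.log(t) / counts[i])
                scores.append(means[i] - bonus)  # optimistic lower bound
        chosen = sorted(range(n), key=scores.__getitem__)[:k]
        for i in chosen:
            latency = rng.expovariate(1.0 / true_means[i])  # observed time
            counts[i] += 1
            means[i] += (latency - means[i]) / counts[i]  # running mean
    return counts
```

Over time the selection concentrates on the genuinely fast workers while the confidence bonus keeps occasionally re-testing the slow ones.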

Multiple Criss-Cross Insertion and Deletion Correcting Codes

no code implementations4 Feb 2021 Lorenz Welter, Rawad Bitar, Antonia Wachter-Zeh, Eitan Yaakobi

We show an equivalence between correcting $t$-criss-cross deletions and $t$-criss-cross insertions and show that a code correcting $t$-criss-cross insertions/deletions has redundancy at least $tn + t \log n - \log(t!)$.

Information Theory

Adaptive Private Distributed Matrix Multiplication

no code implementations14 Jan 2021 Rawad Bitar, Marvin Xhemrishi, Antonia Wachter-Zeh

A master server owns two private matrices $\mathbf{A}$ and $\mathbf{B}$ and hires worker nodes to help compute their multiplication.

Information Theory
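To make the setting concrete, here is a minimal sketch of private matrix multiplication via polynomial (Shamir-style) sharing over a finite field, protecting against any single honest-but-curious worker. The field modulus, three-worker configuration, and degree-1 masking are illustrative assumptions; the paper's contribution (an adaptive scheme) is not reproduced here:

```python
import random

P = 2_147_483_647  # prime field modulus (2^31 - 1); an assumption for the sketch

def mat_add(X, Y):
    return [[(a + b) % P for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def mat_scale(c, X):
    return [[(c * a) % P for a in row] for row in X]

def mat_mul(X, Y):
    inner, cols = len(Y), len(Y[0])
    return [[sum(X[i][t] * Y[t][j] for t in range(inner)) % P
             for j in range(cols)] for i in range(len(X))]

def private_matmul(A, B, seed=0):
    """Compute A @ B (mod P) with 3 honest-but-curious workers.

    The master shares f(x) = A + R*x and g(x) = B + S*x with uniformly
    random masks R, S; worker x sees only f(x), g(x), so a single
    non-colluding worker learns nothing about A or B. Each worker returns
    h(x) = f(x) @ g(x); h has degree 2 and h(0) = A @ B, so the master
    interpolates at x = 0 from the three answers.
    """
    rows, inner, cols = len(A), len(B), len(B[0])
    rng = random.Random(seed)
    R = [[rng.randrange(P) for _ in range(inner)] for _ in range(rows)]
    S = [[rng.randrange(P) for _ in range(cols)] for _ in range(inner)]
    h = []
    for x in (1, 2, 3):
        f_x = mat_add(A, mat_scale(x, R))  # share of A sent to worker x
        g_x = mat_add(B, mat_scale(x, S))  # share of B sent to worker x
        h.append(mat_mul(f_x, g_x))        # computed locally by worker x
    # Lagrange interpolation at 0 for points x = 1, 2, 3:
    # h(0) = 3*h(1) - 3*h(2) + h(3)   (coefficients taken mod P)
    return mat_add(mat_add(mat_scale(3, h[0]), mat_scale(P - 3, h[1])), h[2])
```

The degree-2 product polynomial is why privacy has a price: masking each factor with one random term forces the master to wait for three answers instead of one.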

Adaptive Distributed Stochastic Gradient Descent for Minimizing Delay in the Presence of Stragglers

no code implementations25 Feb 2020 Serge Kas Hanna, Rawad Bitar, Parimal Parag, Venkat Dasari, Salim El Rouayheb

One solution studied in the literature is to wait at each iteration for the responses of the fastest $k<n$ workers before updating the model, where $k$ is a fixed parameter.
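The fixed-k baseline described in the snippet can be sketched directly; the toy objective, latency model, and noise levels below are assumptions for illustration, and the paper's contribution (choosing k adaptively) is not implemented here:

```python
import random

def fastest_k_sgd(dim=2, n=5, k=3, iters=100, lr=0.1, seed=0):
    """Distributed SGD that waits only for the fastest k of n workers.

    Toy objective f(x) = ||x||^2: each worker returns a noisy gradient
    2x together with a simulated response latency, and the master
    averages the k earliest gradients per iteration, ignoring the n - k
    stragglers. (Fixed-k baseline; the paper adapts k over time.)
    """
    rng = random.Random(seed)
    x = [1.0] * dim

    def worker(xv):
        latency = rng.expovariate(1.0)                    # simulated straggling
        grad = [2 * c + rng.gauss(0.0, 0.1) for c in xv]  # noisy gradient of ||x||^2
        return latency, grad

    for _ in range(iters):
        responses = sorted((worker(x) for _ in range(n)), key=lambda r: r[0])[:k]
        avg = [sum(g[i] for _, g in responses) / k for i in range(dim)]
        x = [c - lr * gi for c, gi in zip(x, avg)]
    return x
```

The trade-off motivating an adaptive k is visible in this sketch: a small k cuts per-iteration waiting time but averages fewer gradients, so each update is noisier.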
