Search Results for author: Maximilian Egger

Found 10 papers, 0 papers with code

Scalable and Reliable Over-the-Air Federated Edge Learning

no code implementations · 16 Jul 2024 · Maximilian Egger, Christoph Hofmeister, Cem Kaya, Rawad Bitar, Antonia Wachter-Zeh

However, FEEL still suffers from a communication bottleneck due to the transmission of high-dimensional model updates from the clients to the federator.

Self-Duplicating Random Walks for Resilient Decentralized Learning on Graphs

no code implementations · 16 Jul 2024 · Maximilian Egger, Ghadir Ayache, Rawad Bitar, Antonia Wachter-Zeh, Salim El Rouayheb

We propose a decentralized algorithm called DECAFORK that can maintain the number of RWs in the graph around a desired value even in the presence of arbitrary RW failures.
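
The fork-to-maintain-a-target-population idea can be illustrated with a toy simulation. The failure and forking rule below (names `simulate_rw_population`, `fail_prob`, `fork_rate` are illustrative inventions) is a hypothetical stand-in, not the actual DECAFORK estimator, which works from locally observable quantities on the graph:

```python
import random

def simulate_rw_population(target, fail_prob, fork_rate, steps, seed=0):
    """Toy model of self-duplicating random walks: each walk dies with
    probability fail_prob per step; survivors fork with a probability
    that grows with the observed deficit below the target, pulling the
    population count back toward it."""
    rng = random.Random(seed)
    count = target
    history = [count]
    for _ in range(steps):
        survivors = sum(1 for _ in range(count) if rng.random() > fail_prob)
        deficit = max(target - survivors, 0)
        # Illustrative forking rule: expected number of forks ~ deficit.
        forks = sum(
            1 for _ in range(survivors)
            if rng.random() < min(1.0, fork_rate * deficit / max(survivors, 1))
        )
        count = survivors + forks
        history.append(count)
    return history

hist = simulate_rw_population(target=50, fail_prob=0.1, fork_rate=1.0, steps=100)
```

Despite 10% of walks failing every step, the population in this toy model hovers near the target rather than dying out.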

Communication-Efficient Byzantine-Resilient Federated Zero-Order Optimization

no code implementations · 20 Jun 2024 · Afonso de Sá Delgado Neto, Maximilian Egger, Mayank Bakshi, Rawad Bitar

We introduce CYBER-0, the first zero-order optimization algorithm for memory-and-communication efficient Federated Learning, resilient to Byzantine faults.

Federated Learning
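
Zero-order methods like CYBER-0 replace backpropagated gradients with function-value queries only. As a reference point (not CYBER-0's specific update rule, which is given in the paper), a generic two-point Gaussian-smoothing gradient estimator looks like this:

```python
import numpy as np

def zero_order_gradient(f, x, mu=1e-4, num_dirs=2000, rng=None):
    """Two-point zero-order gradient estimate of f at x: average the
    directional finite difference along random Gaussian directions.
    Only function evaluations are used -- no backpropagation."""
    rng = np.random.default_rng(rng)
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.shape[0])
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / num_dirs

# Sanity check on f(x) = ||x||^2, whose true gradient is 2x.
f = lambda x: float(x @ x)
x = np.array([1.0, -2.0, 0.5])
g = zero_order_gradient(f, x)
```

The memory-and-communication appeal is that a client only needs scalar function values; with shared randomness, transmitting a few scalars per round can stand in for a full gradient vector.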

LoByITFL: Low Communication Secure and Private Federated Learning

no code implementations · 29 May 2024 · Yue Xia, Christoph Hofmeister, Maximilian Egger, Rawad Bitar

Federated Learning (FL) faces several challenges, such as the privacy of the clients' data and security against Byzantine clients.

Federated Learning

Byzantine-Resilient Secure Aggregation for Federated Learning Without Privacy Compromises

no code implementations · 14 May 2024 · Yue Xia, Christoph Hofmeister, Maximilian Egger, Rawad Bitar

We propose ByITFL, a novel scheme for FL that provides resilience against Byzantine users while keeping the users' data private from the federator and private from other users.

Federated Learning · Privacy Preserving
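
To see how an aggregate can be computed while individual updates stay hidden from the federator, here is the textbook pairwise-masking primitive that underlies many secure-aggregation schemes. This is a generic illustration, not the ByITFL construction itself, and `mask_updates` is a name chosen here for the sketch:

```python
import numpy as np

def mask_updates(updates, seed=0):
    """Pairwise additive masking: each pair (i, j) shares a random mask
    that client i adds and client j subtracts. Individual masked vectors
    look random, but the masks cancel in the sum."""
    rng = np.random.default_rng(seed)
    n = len(updates)
    masked = [u.astype(float).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.standard_normal(updates[0].shape)
            masked[i] += m
            masked[j] -= m
    return masked

updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
masked = mask_updates(updates)
# The federator sees only masked vectors, yet their sum equals the true sum.
```

Making such a primitive simultaneously robust to Byzantine clients, who can send arbitrary masked vectors, is the hard part that schemes like ByITFL address.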

Maximal-Capacity Discrete Memoryless Channel Identification

no code implementations · 18 Jan 2024 · Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh, Deniz Gündüz, Nir Weinberger

Based on this capacity estimator, a gap-elimination algorithm termed BestChanID is proposed, which is oblivious to the capacity-achieving input distribution and is guaranteed to output the DMC with the largest capacity, with a desired confidence.
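
For context on the quantity being estimated: when the transition matrix of a DMC is fully known, its capacity can be computed with the classical Blahut-Arimoto algorithm. The sketch below is that standard algorithm, not the paper's sample-based estimator, which must work without knowing the channel:

```python
import numpy as np

def dmc_capacity(W, iters=200):
    """Capacity in bits of a DMC with known transition matrix W[x, y]
    (rows: inputs, columns: outputs), via Blahut-Arimoto iterations."""
    nx, _ = W.shape
    p = np.full(nx, 1.0 / nx)  # input distribution, initialized uniform
    with np.errstate(divide="ignore", invalid="ignore"):
        for _ in range(iters):
            q = p @ W  # induced output distribution
            # d[x] = D(W[x, :] || q), the per-input KL divergence
            d = np.sum(np.where(W > 0, W * np.log(np.where(W > 0, W / q, 1.0)), 0.0), axis=1)
            p = p * np.exp(d)
            p /= p.sum()
        q = p @ W
        mi = np.sum(p[:, None] * np.where(W > 0, W * np.log(np.where(W > 0, W / q, 1.0)), 0.0))
    return mi / np.log(2)

# Binary symmetric channel, crossover 0.1: capacity = 1 - h(0.1) ~ 0.531 bits.
cap = dmc_capacity(np.array([[0.9, 0.1], [0.1, 0.9]]))
```

BestChanID's setting is harder: the transition matrices are unknown, each candidate channel can only be sampled, and the capacity-achieving input distribution is not given.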

Private Aggregation in Hierarchical Wireless Federated Learning with Partial and Full Collusion

no code implementations · 25 Jun 2023 · Maximilian Egger, Christoph Hofmeister, Antonia Wachter-Zeh, Rawad Bitar

Existing literature focuses on private aggregation schemes that tackle the privacy problem in federated learning in settings where all users are connected to each other and to the federator.

Federated Learning

Fast and Straggler-Tolerant Distributed SGD with Reduced Computation Load

no code implementations · 17 Apr 2023 · Maximilian Egger, Serge Kas Hanna, Rawad Bitar

Considering this model, we construct a novel scheme that adapts both the number of workers and the computation load throughout the run-time of the algorithm.

Nested Gradient Codes for Straggler Mitigation in Distributed Machine Learning

no code implementations · 16 Dec 2022 · Luis Maßny, Christoph Hofmeister, Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh

Since the number of stragglers in practice is random and unknown a priori, tolerating a fixed number of stragglers can yield a sub-optimal computation load and can result in higher latency.

Scheduling

Cost-Efficient Distributed Learning via Combinatorial Multi-Armed Bandits

no code implementations · 16 Feb 2022 · Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh, Deniz Gündüz

We consider the distributed SGD problem, where a main node distributes gradient calculations among $n$ workers.

Multi-Armed Bandits
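
Framing worker selection as a bandit problem can be sketched with a simplified confidence-bound rule: per round, pick the k workers whose lower confidence bound on expected completion time is smallest. This is an illustrative stand-in under assumed exponential completion times, not the paper's combinatorial-bandit scheme; all names here are invented for the sketch:

```python
import math
import random

def ucb_worker_selection(true_speeds, rounds, k, seed=0):
    """Each round, select k of n workers by a confidence-bound rule on
    observed completion times (smaller is better). Unseen workers are
    explored first; completion times are drawn as exponentials with the
    worker's (unknown to the learner) speed as rate."""
    rng = random.Random(seed)
    n = len(true_speeds)
    counts = [0] * n
    mean_time = [0.0] * n
    choices = []
    for t in range(1, rounds + 1):
        def lcb(i):
            if counts[i] == 0:
                return float("-inf")  # force initial exploration
            return mean_time[i] - math.sqrt(2 * math.log(t) / counts[i])
        selected = sorted(range(n), key=lcb)[:k]
        choices.append(selected)
        for i in selected:
            obs = rng.expovariate(true_speeds[i])  # noisy completion time
            counts[i] += 1
            mean_time[i] += (obs - mean_time[i]) / counts[i]
    return choices

# Two fast workers (rate 5, mean time 0.2) vs two slow ones (rate 0.5).
choices = ucb_worker_selection([5.0, 5.0, 0.5, 0.5], rounds=500, k=2)
```

After a short exploration phase, the selection concentrates on the fast workers, while the shrinking confidence bonus ensures slow workers are still probed occasionally.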
