Search Results for author: Ahmad Rammal

Found 2 papers, 1 papers with code

Correlated Quantization for Faster Nonconvex Distributed Optimization

no code implementations • 10 Jan 2024 • Andrei Panferov, Yury Demidovich, Ahmad Rammal, Peter Richtárik

We analyze MARINA (Gorbunov et al., 2022), a state-of-the-art distributed non-convex optimization algorithm, equipped with the proposed correlated quantizers, and show that it outperforms both the original MARINA and the distributed SGD of Suresh et al. (2022) in terms of communication complexity.

Tags: Distributed Optimization, Quantization
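The key idea behind correlated quantization is that workers coordinate their randomness so that individual quantization errors partially cancel when the server averages the messages. The sketch below is a minimal illustration of that principle using shared stratified dithers; the function name, the uniform-grid quantizer, and the per-coordinate permutation scheme are assumptions for illustration, not the exact quantizer analyzed in the paper.

```python
import numpy as np

def correlated_dither_quantize(grads, levels=16, seed=0):
    """Quantize each worker's gradient with correlated stochastic rounding.

    Instead of each worker drawing an independent uniform dither, the n
    workers share a stratified set of dithers (a random permutation of
    evenly spaced offsets in (0, 1)), so per-coordinate rounding errors
    partially cancel in the server-side average. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    n, d = grads.shape
    lo, hi = grads.min(), grads.max()
    scale = (hi - lo) / (levels - 1)
    # One stratified dither per worker, permuted independently per coordinate.
    strata = (np.arange(n) + 0.5) / n  # evenly spaced points in (0, 1)
    dither = np.empty((n, d))
    for j in range(d):
        dither[:, j] = rng.permutation(strata)
    # Randomized rounding onto the shared uniform grid.
    q = np.floor((grads - lo) / scale + dither)
    return lo + scale * q
```

With independent dithers the variance of the averaged error decays as 1/n; correlating the dithers as above drives the errors to cancel more aggressively across workers, which is the mechanism the paper exploits to tighten MARINA's communication complexity.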

Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates

1 code implementation • 15 Oct 2023 • Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard Gorbunov, Peter Richtárik

Byzantine robustness is an essential feature of algorithms for certain distributed optimization problems, typically encountered in collaborative/federated learning.

Tags: Distributed Optimization, Federated Learning
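The paper's setting combines two ingredients: workers compress their gradients to save communication, and the server aggregates them robustly so that a bounded number of Byzantine workers cannot hijack the update. A minimal sketch of that pattern, using Top-k sparsification and a coordinate-wise trimmed mean as stand-ins (the function names and the choice of compressor/aggregator are illustrative assumptions, not the paper's specific algorithms):

```python
import numpy as np

def top_k_compress(v, k):
    """Top-k sparsifier: keep the k largest-magnitude coordinates, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def robust_aggregate(grads, k, n_byz):
    """Compress each worker's gradient, then aggregate with a coordinate-wise
    trimmed mean that discards the n_byz smallest and largest entries per
    coordinate -- outliers injected by Byzantine workers get trimmed away."""
    compressed = np.stack([top_k_compress(g, k) for g in grads])
    s = np.sort(compressed, axis=0)
    return s[n_byz:len(grads) - n_byz].mean(axis=0)
```

The trimmed mean tolerates up to n_byz adversarial workers per coordinate, at the cost of discarding some honest information; the paper's contribution is designing compressors and aggregation rules whose combination attains improved convergence rates rather than this naive composition.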
