Search Results for author: Artavazd Maranjyan

Found 2 papers, 1 paper with code

LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression

no code implementations • 7 Mar 2024 • Laurent Condat, Artavazd Maranjyan, Peter Richtárik

In distributed optimization and learning, and even more so in the modern framework of federated learning, communication, which is slow and costly, is critical.

Tasks: Distributed Optimization, Federated Learning, +1
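The title pairs local training with communication compression. As a rough illustration of those two ideas only (a hypothetical sketch, not the LoCoDL algorithm from the paper), a client could run a few local gradient steps and then send a randomly sparsified, unbiased version of its model update:

```python
import numpy as np

def rand_k(v, k, rng):
    """Unbiased rand-k compressor: keep k random coordinates, rescale by d/k."""
    d = v.size
    mask = np.zeros(d)
    idx = rng.choice(d, size=k, replace=False)
    mask[idx] = d / k  # rescaling keeps the compressor unbiased in expectation
    return v * mask

def local_steps_then_compress(x, grad_fn, lr, num_local_steps, k, rng):
    """Run local gradient steps, then compress the resulting model update."""
    x_local = x.copy()
    for _ in range(num_local_steps):
        x_local -= lr * grad_fn(x_local)   # local training on the client
    update = x_local - x                   # what the client would communicate
    return rand_k(update, k, rng)          # compressed message to the server
```

Here `grad_fn`, the step size `lr`, and the compressor choice are placeholders; the actual method in the paper differs in how local training and compression are combined and analyzed.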

GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity

1 code implementation • 28 Oct 2022 • Artavazd Maranjyan, Mher Safaryan, Peter Richtárik

We study a class of distributed optimization algorithms that aim to alleviate high communication costs by allowing the clients to perform multiple local gradient-type training steps prior to communication.

Tasks: Common Sense Reasoning, Distributed Optimization
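The excerpt describes clients taking multiple local gradient-type steps before each communication round. A minimal sketch of that general pattern (plain local gradient descent with periodic server averaging, assumed for illustration; not GradSkip's accelerated scheme) might look like:

```python
import numpy as np

def local_gradient_method(x0, client_grads, lr, num_rounds, local_steps):
    """Each round: every client runs several local gradient steps,
    then the server averages the resulting models (one communication)."""
    x = x0.copy()
    for _ in range(num_rounds):
        client_models = []
        for grad_fn in client_grads:          # one gradient oracle per client
            x_i = x.copy()
            for _ in range(local_steps):      # local steps before communicating
                x_i -= lr * grad_fn(x_i)
            client_models.append(x_i)
        x = np.mean(client_models, axis=0)    # single communication round
    return x
```

The point of this family of methods is that increasing `local_steps` reduces how often clients must communicate; GradSkip additionally lets clients skip some local gradient computations, which this sketch does not capture.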
