no code implementations • 7 Mar 2024 • Laurent Condat, Artavazd Maranjyan, Peter Richtárik
In distributed optimization and learning, and even more so in the modern framework of federated learning, communication, which is slow and costly, is the critical bottleneck.
1 code implementation • 28 Oct 2022 • Artavazd Maranjyan, Mher Safaryan, Peter Richtárik
We study a class of distributed optimization algorithms that aim to alleviate high communication costs by allowing the clients to perform multiple local gradient-type training steps prior to communication.
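The idea of interleaving several local gradient steps with occasional communication can be illustrated with a minimal local-SGD-style sketch. This is an illustrative toy, not the paper's algorithm: the per-client quadratic losses, function names, and step sizes below are all hypothetical assumptions.

```python
# Hypothetical sketch of "multiple local gradient steps before communication"
# (local-SGD style). The quadratic client losses f_i(x) = (x - a_i)^2 / 2
# and all parameter choices are illustrative, not from the paper.

def local_steps_then_average(client_minima, x0, rounds=50, local_steps=5, lr=0.1):
    """Minimize the average of per-client quadratics f_i(x) = (x - a_i)^2 / 2.

    Each communication round, every client starts from the shared iterate,
    runs `local_steps` gradient steps on its own loss, and the server then
    averages the local models -- one communication per round instead of
    one per gradient step.
    """
    x = x0
    for _ in range(rounds):
        local_models = []
        for a in client_minima:        # a is the minimizer of client i's loss
            xi = x
            for _ in range(local_steps):
                grad = xi - a          # gradient of (xi - a)^2 / 2
                xi -= lr * grad
            local_models.append(xi)
        x = sum(local_models) / len(local_models)  # the only communication
    return x

# With equal curvatures, the global optimum is the mean of the client minima.
print(local_steps_then_average([0.0, 2.0, 4.0], x0=10.0))
```

Here communication happens once every `local_steps` gradient evaluations, which is the cost-saving the abstract refers to; the trade-off studied in such work is how local drift between clients affects convergence.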