# SQuARM-SGD: Communication-Efficient Momentum SGD for Decentralized Optimization

13 May 2020 · Navjot Singh, Deepesh Data, Jemin George, Suhas Diggavi

In this paper, we consider the problem of communication-efficient decentralized training of large-scale machine learning models over a network. We propose and analyze SQuARM-SGD, an algorithm for decentralized training, which employs *momentum* and *compressed communication* between nodes regulated by a locally computable triggering condition in stochastic gradient descent (SGD)...
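To make the three ingredients named in the abstract concrete (momentum, compressed communication, and a local triggering condition), here is a minimal single-node sketch. The compressor (top-k), the trigger rule, and all function names are illustrative assumptions, not the paper's exact algorithm, which also involves error feedback and a decaying trigger threshold:

```python
import numpy as np

def topk_compress(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest.
    (A common compressor; the paper's exact operator is an assumption here.)"""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def squarm_like_step(x, m, grad, neighbor_avg, last_sent,
                     lr=0.1, beta=0.9, mix=0.5, k=2, trigger_tol=1e-3):
    """One illustrative local update: momentum SGD, a gossip (consensus)
    step, and compressed, event-triggered communication.
    Hypothetical signature -- a sketch, not the authors' implementation."""
    m = beta * m + grad                      # momentum buffer update
    x = x - lr * m                           # local SGD step
    x = (1 - mix) * x + mix * neighbor_avg   # consensus mixing with neighbors
    # local trigger: communicate only if the model moved far enough
    send = None
    if last_sent is None or np.linalg.norm(x - last_sent) > trigger_tol:
        send = topk_compress(x, k)           # compressed message to neighbors
        last_sent = x.copy()
    return x, m, send, last_sent

# Tiny usage example on a 4-dimensional model
x = np.ones(4)
m = np.zeros(4)
grad = np.array([0.5, -1.0, 0.2, 0.0])
x, m, send, last_sent = squarm_like_step(x, m, grad, neighbor_avg=np.ones(4),
                                         last_sent=None)
```

When the trigger does not fire, `send` is `None` and no message is exchanged, which is the source of the communication savings; when it fires, only the k largest entries are transmitted.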

