Communication-Efficient Distributed Learning via Lazily Aggregated Quantized Gradients

NeurIPS 2019 · Jun Sun, Tianyi Chen, Georgios B. Giannakis, Zaiyue Yang

The present paper develops a novel aggregated gradient approach for distributed machine learning that adaptively compresses the gradient communication. The key idea is to first quantize the computed gradients, and then skip less informative quantized gradient communications by reusing outdated gradients...


Code


No code implementations yet.
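Since no official implementation is listed, the mechanism described in the abstract can be illustrated with a short sketch: quantize the freshly computed gradient, and skip the upload when the quantized innovation is small, letting the server reuse the outdated gradient. The following is a minimal NumPy sketch under simplifying assumptions, not the authors' implementation: the names `quantize` and `LazyQuantizedWorker`, the 4-bit uniform quantizer, and the constant skip threshold are all hypothetical choices for illustration. The actual LAQ rule ties the skip condition to recent iterate changes, which is not reproduced here.

```python
# Minimal LAQ-style sketch (illustrative simplification, not the paper's algorithm).
import numpy as np

def quantize(v, ref, bits=4):
    """Uniformly quantize the innovation (v - ref) using 2**bits levels."""
    diff = v - ref
    radius = np.max(np.abs(diff)) + 1e-12        # dynamic range of the innovation
    step = 2.0 * radius / (2 ** bits - 1)
    q = np.round((diff + radius) / step) * step - radius
    return ref + q                               # quantized gradient the server can rebuild

class LazyQuantizedWorker:
    """One worker: quantize, then decide whether the upload is worth sending."""

    def __init__(self, dim, bits=4, threshold=1e-3):
        self.bits = bits
        self.threshold = threshold               # hypothetical constant skip threshold
        self.last_sent = np.zeros(dim)           # last quantized gradient the server holds

    def step(self, grad):
        """Return a quantized gradient to upload, or None to skip this round."""
        q_grad = quantize(grad, self.last_sent, self.bits)
        innovation = np.linalg.norm(q_grad - self.last_sent) ** 2
        if innovation < self.threshold:          # change too small: reuse outdated gradient
            return None
        self.last_sent = q_grad
        return q_grad

# Toy single-worker least-squares run showing how many uploads get skipped.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 10))
b = rng.normal(size=100)
x = np.zeros(10)
worker = LazyQuantizedWorker(dim=10)
server_grad = np.zeros(10)
skipped = 0
for _ in range(200):
    grad = A.T @ (A @ x - b) / len(b)
    msg = worker.step(grad)
    if msg is None:
        skipped += 1                             # server reuses the outdated gradient
    else:
        server_grad = msg
    x -= 0.1 * server_grad
print(f"skipped {skipped}/200 uploads")
```

In this sketch a skipped round costs no communication because the server simply reapplies the last received quantized gradient; the single-worker toy loop is meant to show the mechanics, not any convergence guarantee from the paper.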
