Communication-Efficient Distributed Blockwise Momentum SGD with Error-Feedback

NeurIPS 2019 · Shuai Zheng, Ziyue Huang, James T. Kwok

Communication overhead is a major bottleneck hampering the scalability of distributed machine learning systems. Recently, there has been a surge of interest in using gradient compression to improve the communication efficiency of distributed neural network training...
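To make the idea of gradient compression with error feedback concrete, here is a minimal single-worker sketch of a sign-compression step with an error-feedback memory. This is an illustration of the general technique the abstract refers to, not the paper's exact algorithm: the function name `ef_sign_step`, the mean-absolute-value scaling, and the omission of the paper's blockwise partitioning and momentum are all simplifying assumptions.

```python
import numpy as np

def ef_sign_step(params, grad, memory, lr=0.01):
    """One worker-side update of sign compression with error feedback.

    Hypothetical sketch: the paper's actual method additionally uses
    blockwise scaling and momentum, which are omitted here.
    """
    # Fold the residual error from the previous round back into the gradient.
    corrected = grad + memory
    # Compress to signs, scaled so the average magnitude is preserved.
    scale = np.mean(np.abs(corrected))
    compressed = scale * np.sign(corrected)
    # Remember what compression discarded; it is fed back next round,
    # so the error does not accumulate.
    new_memory = corrected - compressed
    # Apply the compressed gradient locally.
    new_params = params - lr * compressed
    return new_params, new_memory
```

In a distributed setting, only `compressed` (one sign bit per coordinate plus one scale) would be communicated, which is the source of the bandwidth savings; the error-feedback memory stays local to each worker.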



