Trading Redundancy for Communication: Speeding up Distributed SGD for Non-convex Optimization

International Conference on Machine Learning (ICML) 2019

Farzin Haddadpour, Mohammad Mahdi Kamani, Mehrdad Mahdavi, Viveck Cadambe

Communication overhead is one of the key challenges that hinder the scalability of distributed optimization algorithms to train large neural networks. In recent years, there has been a great deal of research to alleviate communication cost by compressing the gradient vector or using local updates and periodic model averaging...
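For context, the "local updates and periodic model averaging" approach mentioned in the abstract can be sketched as follows. This is a minimal illustrative simulation, not the paper's algorithm: the quadratic per-worker objectives, the number of workers, and the hyperparameters (`tau`, `lr`, `rounds`) are assumptions chosen only to show how workers take several local SGD steps between each communication round.

```python
import numpy as np

# Illustrative setup (not from the paper): each worker i minimizes
# f_i(w) = 0.5 * ||A_i w - b_i||^2 on its own synthetic data.
rng = np.random.default_rng(0)
num_workers, dim = 4, 10
tau, rounds, lr = 5, 20, 0.05   # tau = local steps between averaging rounds

A = [rng.standard_normal((20, dim)) for _ in range(num_workers)]
b = [rng.standard_normal(20) for _ in range(num_workers)]

def grad(i, w):
    # Gradient of the worker-i least-squares objective: A_i^T (A_i w - b_i)
    return A[i].T @ (A[i] @ w - b[i])

w_global = np.zeros(dim)
for _ in range(rounds):
    local_models = []
    for i in range(num_workers):
        w = w_global.copy()
        for _ in range(tau):          # tau local SGD steps, no communication
            w -= lr * grad(i, w)
        local_models.append(w)
    # One communication round: periodic model averaging across workers
    w_global = np.mean(local_models, axis=0)

avg_loss = np.mean([0.5 * np.linalg.norm(A[i] @ w_global - b[i])**2
                    for i in range(num_workers)])
print("average loss after periodic-averaging SGD:", avg_loss)
```

Increasing `tau` reduces how often models are exchanged (less communication) at the cost of letting local models drift apart between rounds, which is the trade-off the abstract refers to.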
