A Distributed Hierarchical SGD Algorithm with Sparse Global Reduction

12 Mar 2019  ·  Fan Zhou, Guojing Cong

Reducing communication in training large-scale machine learning applications on distributed platforms remains a significant challenge. To address this issue, we propose a distributed hierarchical averaging stochastic gradient descent (Hier-AVG) algorithm that introduces local reduction to allow infrequent global reduction. As a general form of parallel SGD, Hier-AVG can reproduce several popular synchronous parallel SGD variants by adjusting its parameters. We show that Hier-AVG with infrequent global reduction still achieves the standard convergence rate for non-convex optimization problems. In addition, we show that more frequent local averaging with more participants involved leads to faster training convergence. By comparing Hier-AVG with another popular distributed training algorithm, K-AVG, we show that by deploying local averaging with fewer global averaging steps, Hier-AVG achieves comparable training speed while frequently obtaining better test accuracy. This indicates that local averaging can serve as an effective remedy for reducing communication overhead when the number of learners is large. Experimental results for Hier-AVG with several state-of-the-art deep neural nets on CIFAR-10 and IMAGENET-1K are presented to validate our analysis and show its superiority.
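The abstract describes a two-level averaging scheme: learners run local SGD, periodically average within a group (local reduction), and only occasionally average across all groups (sparse global reduction). A minimal sketch of this idea on a toy quadratic objective, with all function names, period parameters, and noise settings being illustrative assumptions rather than the paper's actual algorithm or hyperparameters:

```python
import numpy as np

def hier_avg_sketch(grad, w0, n_groups=2, group_size=4,
                    local_period=4, global_period=3,
                    lr=0.1, steps=48, seed=0):
    """Hypothetical sketch of hierarchical averaging SGD (Hier-AVG-style):
    every `local_period` SGD steps, learners within a group average their
    weights (local reduction); every `global_period` local averagings, all
    learners average (infrequent global reduction)."""
    rng = np.random.default_rng(seed)
    n = n_groups * group_size
    # one weight vector per learner, all starting from the same point
    w = np.tile(np.asarray(w0, dtype=float), (n, 1))
    local_count = 0
    for t in range(1, steps + 1):
        # local SGD step; Gaussian noise stands in for minibatch gradient noise
        noise = rng.normal(scale=0.1, size=w.shape)
        w -= lr * (grad(w) + noise)
        if t % local_period == 0:
            # local reduction: average weights within each group
            for g in range(n_groups):
                blk = slice(g * group_size, (g + 1) * group_size)
                w[blk] = w[blk].mean(axis=0)
            local_count += 1
            if local_count % global_period == 0:
                # sparse global reduction: average across all learners
                w[:] = w.mean(axis=0)
    return w.mean(axis=0)

# toy objective f(w) = 0.5 * ||w||^2, whose gradient is w itself
w_final = hier_avg_sketch(lambda w: w, w0=[2.0, -3.0])
```

In this sketch, communication cost is dominated by how often the global reduction runs: with `local_period=4` and `global_period=3`, a full all-learner average happens only once every 12 SGD steps, illustrating how local averaging can substitute for frequent global synchronization.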
