BML: A High-performance, Low-cost Gradient Synchronization Algorithm for DML Training

NeurIPS 2018 · Songtao Wang, Dan Li, Yang Cheng, Jinkun Geng, Yanshu Wang, Shuai Wang, Shu-Tao Xia, Jianping Wu

In distributed machine learning (DML), the network performance between machines significantly impacts the speed of iterative training. In this paper we propose BML, a new gradient synchronization algorithm with higher network performance and lower network cost than the current practice.
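To ground the term "gradient synchronization" used above, here is a minimal illustrative sketch of the generic synchronous gradient-averaging step that such algorithms accelerate. This is not the BML algorithm itself (the paper's BCube-based scheme is not reproduced here); the function name and data are hypothetical.

```python
def synchronize_gradients(worker_grads):
    """Average per-worker gradient vectors, as in synchronous data-parallel
    training. Algorithms like BML aim to perform this aggregation with less
    network traffic and lower latency; the arithmetic result is the same."""
    n = len(worker_grads)
    dim = len(worker_grads[0])
    return [sum(g[i] for g in worker_grads) / n for i in range(dim)]

# Hypothetical example: three workers, each holding a local gradient
# for a two-parameter model.
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(synchronize_gradients(grads))  # -> [3.0, 4.0]
```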




