Search Results for author: Chengjian Liu

Found 2 papers, 2 papers with code

DeAR: Accelerating Distributed Deep Learning with Fine-Grained All-Reduce Pipelining

1 code implementation • 24 Feb 2023 • Lin Zhang, Shaohuai Shi, Xiaowen Chu, Wei Wang, Bo Li, Chengjian Liu

Communication scheduling has been shown to be effective in accelerating distributed training, as it enables all-reduce communications to be overlapped with backpropagation computations.

Scheduling
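
The overlap idea the abstract describes can be illustrated with a minimal sketch, assuming PyTorch's `torch.distributed` with an initialized process group; `attach_overlap_hooks` is a hypothetical helper name, not an API from the DeAR paper. Each parameter gets a gradient hook that launches an asynchronous all-reduce as soon as that gradient is produced, so communication for finished layers runs concurrently with backpropagation of the remaining ones.

```python
import torch
import torch.distributed as dist

def attach_overlap_hooks(model):
    """Hypothetical sketch: start an async all-reduce the moment each
    parameter's gradient is computed, overlapping communication with
    the backward pass of earlier layers."""
    handles = []

    def hook(grad):
        # async_op=True returns a Work handle immediately; the
        # all-reduce proceeds concurrently with backpropagation.
        handles.append(dist.all_reduce(grad, async_op=True))
        return grad

    for p in model.parameters():
        if p.requires_grad:
            p.register_hook(hook)
    return handles

# Usage per step, assuming dist.init_process_group() has been called:
#   handles = attach_overlap_hooks(model)  # register once before training
#   loss.backward()                        # hooks fire layer by layer
#   for h in handles:
#       h.wait()                           # drain outstanding all-reduces
#   handles.clear()
```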

A Quantitative Survey of Communication Optimizations in Distributed Deep Learning

1 code implementation • 27 May 2020 • Shaohuai Shi, Zhenheng Tang, Xiaowen Chu, Chengjian Liu, Wei Wang, Bo Li

In this article, we present a quantitative survey of communication optimization techniques for data-parallel distributed DL.

Scheduling
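
A representative optimization in this space is tensor fusion: batching many small gradient tensors into one large all-reduce to amortize per-message latency. The sketch below is illustrative only, assuming PyTorch's `torch.distributed`, an initialized process group, and same-device, same-dtype gradients; `fused_all_reduce`, `_reduce_bucket`, and the 25 MB bucket size are assumed names and values, not ones prescribed by the survey.

```python
import torch
import torch.distributed as dist

def fused_all_reduce(grads, bucket_bytes=25 * 1024 * 1024):
    """Hypothetical sketch of tensor fusion: group gradients into
    buckets and issue one all-reduce per bucket instead of one per
    tensor, reducing the number of network messages."""
    bucket, size = [], 0
    for g in grads:
        bucket.append(g)
        size += g.numel() * g.element_size()
        if size >= bucket_bytes:
            _reduce_bucket(bucket)
            bucket, size = [], 0
    if bucket:
        _reduce_bucket(bucket)

def _reduce_bucket(bucket):
    # Flatten the bucket into one contiguous tensor, all-reduce it,
    # then scatter the reduced values back into the original grads.
    flat = torch.cat([g.reshape(-1) for g in bucket])
    dist.all_reduce(flat)
    offset = 0
    for g in bucket:
        g.copy_(flat[offset:offset + g.numel()].view_as(g))
        offset += g.numel()
```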
