Search Results for author: Yabo Duan

Found 2 papers, 1 paper with code

Merak: An Efficient Distributed DNN Training Framework with Automated 3D Parallelism for Giant Foundation Models

1 code implementation • 10 Jun 2022 • Zhiquan Lai, Shengwei Li, Xudong Tang, Keshi Ge, Weijie Liu, Yabo Duan, Linbo Qiao, Dongsheng Li

These features make it necessary to apply 3D parallelism, which integrates data parallelism, pipeline model parallelism and tensor model parallelism, to achieve high training efficiency.
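The decomposition behind 3D parallelism can be made concrete by showing how a cluster's ranks are factored into the three kinds of process groups. The sketch below is an illustrative assumption, not Merak's actual API; it follows the common Megatron-style layout in which adjacent ranks form tensor-parallel groups.

```python
def build_3d_groups(world_size, tp_size, pp_size):
    """Partition ranks into tensor- (TP), pipeline- (PP) and data-parallel
    (DP) groups.

    Ranks are laid out so that consecutive ranks share a tensor-parallel
    group (typically the highest-bandwidth links), a common convention in
    Megatron-style 3D parallelism. This is a hypothetical helper for
    illustration only.
    """
    assert world_size % (tp_size * pp_size) == 0
    dp_size = world_size // (tp_size * pp_size)

    # Tensor-parallel groups: runs of tp_size consecutive ranks.
    tp_groups = [list(range(r, r + tp_size))
                 for r in range(0, world_size, tp_size)]

    # Pipeline-parallel groups: one rank per pipeline stage, strided.
    stride = world_size // pp_size
    pp_groups = [list(range(r, world_size, stride)) for r in range(stride)]

    # Data-parallel groups: ranks that hold identical model shards.
    dp_groups = []
    for pp_stage in range(pp_size):
        base = pp_stage * stride
        for tp_rank in range(tp_size):
            dp_groups.append([base + tp_rank + d * tp_size
                              for d in range(dp_size)])
    return tp_groups, pp_groups, dp_groups


# Example: 8 GPUs with TP=2 and PP=2 leaves DP=2.
tp, pp, dp = build_3d_groups(8, 2, 2)
```

Each rank belongs to exactly one group of each kind, which is what lets the three forms of parallelism compose without conflicting collectives.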

EmbRace: Accelerating Sparse Communication for Distributed Training of NLP Neural Networks

no code implementations • 18 Oct 2021 • Shengwei Li, Zhiquan Lai, Dongsheng Li, Yiming Zhang, Xiangyu Ye, Yabo Duan

EmbRace introduces Sparsity-aware Hybrid Communication, which integrates AlltoAll and model parallelism into data-parallel training to reduce the communication overhead of highly sparse parameters.

Tasks: Image Classification, Scheduling
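The idea of AlltoAll-based sparse communication can be illustrated with a small routing step: instead of all-reducing a mostly zero gradient, each rank buckets its touched embedding rows by owning rank and exchanges only those buckets. This is a hypothetical sketch of the general technique, not EmbRace's actual implementation; the modulo sharding scheme is an assumption for illustration.

```python
def route_sparse_rows(rows, world_size):
    """Bucket (row_id, grad) pairs by the rank that owns each embedding row.

    Ownership here is a simple modulo shard (an illustrative assumption).
    The returned per-rank buckets are what an AlltoAll collective would
    exchange, so communication volume scales with the touched rows rather
    than the full embedding table.
    """
    buckets = [[] for _ in range(world_size)]
    for row_id, grad in rows:
        buckets[row_id % world_size].append((row_id, grad))
    return buckets


# Example: three touched rows routed across 4 ranks.
buckets = route_sparse_rows([(0, 0.1), (5, 0.2), (2, 0.3)], world_size=4)
```

Only non-empty buckets carry payload, which is why AlltoAll suits highly sparse parameters better than a dense AllReduce.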
