Search Results for author: Lijie Xu

Found 4 papers, 3 papers with code

Stochastic Gradient Descent without Full Data Shuffle

1 code implementation • 12 Jun 2022 • Lijie Xu, Shuang Qiu, Binhang Yuan, Jiawei Jiang, Cedric Renggli, Shaoduo Gan, Kaan Kara, Guoliang Li, Ji Liu, Wentao Wu, Jieping Ye, Ce Zhang

In this paper, we first conduct a systematic empirical study of existing data shuffling strategies, which reveals that all of them have room for improvement: each suffers in either I/O performance or convergence rate.

Computational Efficiency
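
The trade-off the abstract points to is between shuffle randomness and I/O pattern: a full shuffle gives SGD the randomness it relies on but requires random reads over the whole dataset, while coarser shuffles keep reads sequential at some cost to convergence. Below is a minimal Python sketch of that contrast; it illustrates block-level shuffling in general, not the paper's own algorithm, and the `block_size` value is an arbitrary assumption.

```python
import random

def full_shuffle(indices):
    # Full shuffle: best convergence behavior, but random I/O
    # across the entire dataset.
    shuffled = list(indices)
    random.shuffle(shuffled)
    return shuffled

def block_shuffle(indices, block_size):
    # Block-level shuffle: shuffle the order of contiguous blocks
    # (reads stay sequential within a block), then shuffle within
    # each block. Less randomness than a full shuffle, cheaper I/O.
    blocks = [list(indices[i:i + block_size])
              for i in range(0, len(indices), block_size)]
    random.shuffle(blocks)       # randomize block order
    for block in blocks:
        random.shuffle(block)    # randomize tuples inside each block
    return [i for block in blocks for i in block]

epoch_order = block_shuffle(range(1_000_000), block_size=10_000)
```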

Sync-Switch: Hybrid Parameter Synchronization for Distributed Deep Learning

1 code implementation • 16 Apr 2021 • Shijian Li, Oren Mangoubi, Lijie Xu, Tian Guo

Further, we observe that Sync-Switch achieves 3.8% higher converged accuracy with just 1.23x the training time compared to training with ASP.
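
Sync-Switch's idea is to combine synchronization modes over the course of training rather than committing to one. The Python sketch below assumes a simple fixed switch point from bulk-synchronous (BSP) to asynchronous (ASP) updates; the switch epoch and the `train_one_epoch` hook are hypothetical placeholders, not the paper's tuned policy.

```python
def sync_policy(epoch, switch_epoch=5):
    # Hypothetical fixed switch point: BSP while gradients are noisy
    # early in training, then ASP to stop waiting on stragglers.
    return "BSP" if epoch < switch_epoch else "ASP"

for epoch in range(10):
    mode = sync_policy(epoch)
    # train_one_epoch(model, sync_mode=mode)  # placeholder training hook
    print(f"epoch {epoch}: training with {mode}")
```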

Speeding up Deep Learning with Transient Servers

no code implementations • 28 Feb 2019 • Shijian Li, Robert J. Walls, Lijie Xu, Tian Guo

Distributed training frameworks such as TensorFlow have been proposed to reduce the training time of deep learning models by using a cluster of GPU servers.
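
For context, the sketch below shows synchronous data-parallel training across several GPU servers using TensorFlow's standard `tf.distribute.MultiWorkerMirroredStrategy` API (workers are declared via the `TF_CONFIG` environment variable). It is a generic distributed-training setup and does not model the paper's transient, revocable servers.

```python
import tensorflow as tf

# Synchronous data-parallel training across the workers declared in
# TF_CONFIG; gradients are all-reduced across workers at each step.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# model.fit(dataset, epochs=10)  # each worker trains on its shard of the data
```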
