no code implementations • 14 Mar 2021 • Cheng Luo, Lei Qu, Youshan Miao, Peng Cheng, Yongqiang Xiong
Distributed deep learning workloads include throughput-intensive training tasks on GPU clusters, where distributed Stochastic Gradient Descent (SGD) incurs significant communication delays after backward propagation, forcing workers to wait for gradient synchronization either via a centralized parameter server or directly among decentralized workers.
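As a rough illustration of the synchronization pattern described above, the sketch below simulates synchronous SGD with a centralized parameter server in a single process. The worker count, linear model, loss, and learning rate are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: a linear model, synthetic data shards, 4 workers.
NUM_WORKERS, DIM, LR = 4, 8, 0.1
w = np.zeros(DIM)                            # parameters held by the parameter server
X = rng.normal(size=(NUM_WORKERS, 32, DIM))  # each worker's local data shard
y = X @ rng.normal(size=DIM)

def local_gradient(w, Xi, yi):
    """Gradient of mean squared error on one worker's shard."""
    return 2 * Xi.T @ (Xi @ w - yi) / len(yi)

for step in range(100):
    # Backward pass on every worker; in a real cluster this is the point
    # where communication stalls workers until all gradients arrive.
    grads = [local_gradient(w, X[i], y[i]) for i in range(NUM_WORKERS)]
    # Centralized synchronization: the server averages the gradients and
    # the updated parameters are then shared back with all workers.
    w -= LR * np.mean(grads, axis=0)
```

In a decentralized setup, the averaging step would instead be an allreduce performed directly among the workers rather than at a central server.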
no code implementations • 14 May 2019 • Lei Qu, Changfeng Wu, Liang Zou
Since training data are often limited in biomedical tasks, a tradeoff has to be made between model size and representational power.
no code implementations • 27 Sep 2016 • Kangru Wang, Lei Qu, Lili Chen, Yuzhang Gu, DongChen Zhu, Xiaolin Zhang
The main contribution of this paper is a newly proposed descriptor, computed on the disparity image, that yields a disparity texture image.
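The excerpt does not specify the descriptor itself, so the sketch below uses an LBP-style neighbour-comparison code as a hypothetical stand-in, just to show what computing a texture descriptor over a disparity map might look like.

```python
import numpy as np

def disparity_texture(disp):
    """LBP-style texture code over a disparity map (illustrative stand-in
    for the paper's descriptor, which is not specified in the excerpt).
    Each pixel is encoded by comparing it with its 8 neighbours."""
    d = np.pad(disp.astype(np.float32), 1, mode='edge')
    center = d[1:-1, 1:-1]
    code = np.zeros_like(center, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = d[1 + dy:d.shape[0] - 1 + dy, 1 + dx:d.shape[1] - 1 + dx]
        code |= (neighbour >= center).astype(np.uint8) << bit
    return code  # the resulting "disparity texture image"

# Example on a synthetic disparity map
texture = disparity_texture(np.random.default_rng(0).integers(0, 64, (120, 160)))
```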