Search Results for author: Qinggang Zhou

Found 3 papers, 1 paper with code

Squeezing SGD Parallelization Performance in Distributed Training Using Delayed Averaging

no code implementations • 29 Sep 2021 • Pengcheng Li, Yixin Guo, Yawen Zhang, Qinggang Zhou

Mini-batch Stochastic Gradient Descent (SGD) requires workers to halt forward/backward propagation and wait for gradients to be synchronized across all workers before starting the next batch of tasks.
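The abstract contrasts this synchronization barrier with delayed averaging: workers apply their local gradients immediately and fold in the cross-worker average once it arrives a few steps later. A minimal single-process sketch of that idea (reconstructed from the abstract, not the authors' exact algorithm; `delayed_averaging_sgd` and its arguments are hypothetical names):

```python
import numpy as np

def delayed_averaging_sgd(grads_per_worker, lr=0.1, delay=1):
    """Toy simulation of delayed gradient averaging.

    grads_per_worker: array of shape (n_steps, n_workers, dim).
    Each step, one worker applies its local gradient immediately
    instead of waiting at a barrier; the cross-worker average
    arrives `delay` steps late, and a correction then replaces the
    stale local contribution with the averaged one.
    """
    n_steps = grads_per_worker.shape[0]
    w = np.zeros(grads_per_worker.shape[2])
    pending = []  # (local gradient, true average) awaiting arrival
    for t in range(n_steps):
        local = grads_per_worker[t, 0]            # worker 0's view
        w -= lr * local                           # proceed without waiting
        pending.append((local, grads_per_worker[t].mean(axis=0)))
        if len(pending) > delay:                  # delayed average arrives
            stale_local, avg = pending.pop(0)
            w -= lr * (avg - stale_local)         # swap local for average
    for stale_local, avg in pending:              # drain remaining corrections
        w -= lr * (avg - stale_local)
    return w
```

Because the updates are linear in the gradients, once all corrections have drained the final parameters match fully synchronous averaging; the gain is that workers never idle at the barrier in between.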

Exploiting Invariance in Training Deep Neural Networks

1 code implementation • 30 Mar 2021 • Chengxi Ye, Xiong Zhou, Tristan McKinney, Yanfeng Liu, Qinggang Zhou, Fedor Zhdanov

Inspired by two basic mechanisms in animal visual systems, we introduce a feature transform technique that imposes invariance properties in the training of deep neural networks.
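As one illustration of what an invariance-imposing feature transform can look like, the sketch below centers and rescales a feature batch so the output is unaffected by affine shifts and scalings of the input. This is a generic, hypothetical example of the idea, not the paper's specific method:

```python
import numpy as np

def invariant_transform(x, eps=1e-8):
    """Hypothetical sketch: impose shift and scale invariance on a
    batch of features (rows = samples, columns = feature dims) by
    centering and normalizing each dimension. Illustrative only;
    not the transform proposed in the paper.
    """
    x = x - x.mean(axis=0, keepdims=True)          # shift invariance
    x = x / (x.std(axis=0, keepdims=True) + eps)   # scale invariance
    return x
```

With such a transform in front of a layer, training no longer has to re-learn responses to inputs that differ only by a shift or rescaling.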

Image Classification • Object Detection • +1

DaSGD: Squeezing SGD Parallelization Performance in Distributed Training Using Delayed Averaging

no code implementations • 31 May 2020 • Qinggang Zhou, Yawen Zhang, Pengcheng Li, Xiaoyong Liu, Jun Yang, Runsheng Wang, Ru Huang

State-of-the-art deep learning algorithms rely on distributed training systems to tackle the increasing sizes of models and training data sets.
