Search Results for author: Guohao Dai

Found 6 papers, 2 papers with code

Understanding GNN Computational Graph: A Coordinated Computation, IO, and Memory Perspective

no code implementations • 18 Oct 2021 • Hengrui Zhang, Zhongming Yu, Guohao Dai, Guyue Huang, Yufei Ding, Yuan Xie, Yu Wang

In GNNs, the same data are propagated through the graph structure so that the same neural operation is performed multiple times, leading to redundant computation that accounts for 92.4% of total operators.
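A minimal numpy sketch (not code from the paper) of the redundancy described above: transforming node features once per edge repeats the same neural operation, whereas transforming each node once and then propagating the results gives the same output with far fewer operator invocations.

```python
import numpy as np

# Toy graph: 4 nodes, edges as (src, dst) pairs.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 0)]
num_nodes, in_dim, out_dim = 4, 8, 16
X = np.random.randn(num_nodes, in_dim)   # node features
W = np.random.randn(in_dim, out_dim)     # shared neural operation (linear layer)

# Redundant ordering: the same node feature is transformed once per outgoing edge.
out_redundant = np.zeros((num_nodes, out_dim))
for src, dst in edges:
    out_redundant[dst] += X[src] @ W      # W is applied |E| times

# Reordered: transform every node once, then propagate the results.
H = X @ W                                 # W is applied |V| times
out_reordered = np.zeros((num_nodes, out_dim))
for src, dst in edges:
    out_reordered[dst] += H[src]

assert np.allclose(out_redundant, out_reordered)
```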

Explore the Potential of CNN Low Bit Training

no code implementations • 1 Jan 2021 • Kai Zhong, Xuefei Ning, Tianchen Zhao, Zhenhua Zhu, Shulin Zeng, Guohao Dai, Yu Wang, Huazhong Yang

Through this dynamic precision framework, we can reduce the bit-width of convolution, which dominates the computational cost, while keeping the training process close to full-precision floating-point training.

Quantization
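As a rough illustration of the dynamic-precision idea above, the numpy sketch below quantizes only the convolution operands to a per-step bit-width; the quantizer, the policy function, and its threshold are illustrative assumptions rather than the paper's method.

```python
import numpy as np

def quantize_symmetric(x, bits):
    """Simulate symmetric uniform quantization of x to the given bit-width."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax + 1e-12
    return np.round(x / scale).clip(-qmax, qmax) * scale

def conv_bitwidth_for_step(grad_norm, threshold=1.0):
    """Illustrative dynamic policy (an assumption): spend more bits when gradients are large."""
    return 8 if grad_norm > threshold else 4

# Only the convolution operands are quantized; the rest of the training
# pipeline (optimizer state, batch norm, etc.) stays in full precision.
weights = np.random.randn(64, 3, 3, 3)
activations = np.random.randn(16, 3, 32, 32)
bits = conv_bitwidth_for_step(grad_norm=0.5)
w_q = quantize_symmetric(weights, bits)
a_q = quantize_symmetric(activations, bits)
```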

GE-SpMM: General-purpose Sparse Matrix-Matrix Multiplication on GPUs for Graph Neural Networks

2 code implementations • 7 Jul 2020 • Guyue Huang, Guohao Dai, Yu Wang, Huazhong Yang

GE-SpMM performs SpMM-like operations on sparse matrices represented in the most common Compressed Sparse Row (CSR) format, so it can be embedded in GNN frameworks with no preprocessing overhead and can support general GNN algorithms.

Distributed, Parallel, and Cluster Computing
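For reference, the computation GE-SpMM accelerates is CSR-based SpMM; the plain-Python sketch below only spells out the math that the actual CUDA kernel parallelizes, and is not the kernel itself.

```python
import numpy as np

def spmm_csr(indptr, indices, data, B):
    """Reference CSR SpMM: C = A @ B with A stored in CSR format."""
    num_rows = len(indptr) - 1
    C = np.zeros((num_rows, B.shape[1]), dtype=B.dtype)
    for row in range(num_rows):
        # Accumulate each nonzero of row `row` against the dense matrix B.
        for k in range(indptr[row], indptr[row + 1]):
            C[row] += data[k] * B[indices[k]]
    return C

# Tiny example: a 3x3 sparse adjacency matrix times a 3x2 dense feature matrix.
indptr  = np.array([0, 2, 3, 4])
indices = np.array([0, 2, 1, 0])
data    = np.array([1.0, 2.0, 3.0, 4.0])
B = np.arange(6, dtype=float).reshape(3, 2)
print(spmm_csr(indptr, indices, data, B))
```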

Exploring the Potential of Low-bit Training of Convolutional Neural Networks

no code implementations • 4 Jun 2020 • Kai Zhong, Xuefei Ning, Guohao Dai, Zhenhua Zhu, Tianchen Zhao, Shulin Zeng, Yu Wang, Huazhong Yang

For training a variety of models on CIFAR-10, using a 1-bit mantissa and a 2-bit exponent is adequate to keep the accuracy loss within $1\%$.

Quantization
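To make the number format concrete, here is a hedged numpy sketch that simulates rounding to a tiny float with a 1-bit mantissa and a 2-bit exponent; the exponent bias and the omission of subnormal handling are simplifying assumptions, not the paper's exact format.

```python
import numpy as np

def quantize_minifloat(x, mant_bits=1, exp_bits=2, exp_bias=1):
    """Round x to a tiny float with `mant_bits` mantissa bits and `exp_bits`
    exponent bits. The bias and lack of subnormal/overflow handling are
    simplifying assumptions for illustration only."""
    sign = np.sign(x)
    mag = np.abs(x) + 1e-30
    exp = np.clip(np.floor(np.log2(mag)), -exp_bias, 2 ** exp_bits - 1 - exp_bias)
    step = 2.0 ** (exp - mant_bits)          # spacing of representable values
    return sign * np.round(mag / step) * step

x = np.random.randn(5)
print(x)
print(quantize_minifloat(x))
```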

Enabling Efficient and Flexible FPGA Virtualization for Deep Learning in the Cloud

no code implementations • 26 Mar 2020 • Shulin Zeng, Guohao Dai, Hanbo Sun, Kai Zhong, Guangjun Ge, Kaiyuan Guo, Yu Wang, Huazhong Yang

Currently, the majority of FPGA-based DNN accelerators in the cloud are time-division multiplexed among multiple users sharing a single FPGA, and require re-compilation with $\sim$100 s of overhead.
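A back-of-the-envelope sketch of why that re-compilation cost hurts: in the toy time-division-multiplexing model below (all numbers except the $\sim$100 s figure are illustrative assumptions), paying a re-compilation on every context switch can dominate the wall-clock time.

```python
# Toy model of time-division multiplexing a single FPGA among users.
RECOMPILE_SECONDS = 100.0     # paid on every context switch in the baseline
TIME_SLICE_SECONDS = 10.0     # useful work per user per slice (illustrative)

def baseline_overhead(num_users, num_rounds):
    """Fraction of wall-clock time lost to re-compilation when each switch recompiles."""
    switches = num_users * num_rounds
    work = switches * TIME_SLICE_SECONDS
    return switches * RECOMPILE_SECONDS / (work + switches * RECOMPILE_SECONDS)

print(f"overhead with 4 users, 10 rounds: {baseline_overhead(4, 10):.0%}")
```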
