Accelerating convolutional neural network by exploiting sparsity on GPUs

22 Sep 2019  ·  Weizhi Xu, Yintai Sun, Shengyu Fan, Hui Yu, Xin Fu

The convolutional neural network (CNN) is an important deep learning method, and the convolution operation takes a large proportion of a CNN's total execution time. Feature maps for the convolution operation are usually sparse, so the multiplications and additions involving zero values in a feature map contribute nothing to the convolution result. In addition, the convolution layer and the pooling layer are computed separately in traditional methods, which leads to frequent data transfer between the CPU and the GPU. Based on these observations, we propose two new methods to accelerate CNN on GPUs. The first method accelerates the convolution operation by reducing the computation spent on zero values. The second method combines the operations of a convolution layer with the following pooling layer to effectively reduce traffic between the CPU and the GPU. For the first method, we extract convolution layers from LeNet, AlexNet, and GoogLeNet and achieve up to 3.6X speedup over cuDNN for single-layer convolution on the GPU; an experiment on VGG-19 achieves 3.5X speedup over cuDNN for the convolution operation on average. For the second method, the experiment on VGG-19 achieves 4.3X speedup over cuDNN on average.
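
To make the zero-skipping idea concrete, here is a minimal CUDA sketch of a direct convolution kernel that tests each feature-map value and skips the multiply-add when it is zero. The kernel name, single-channel layout, unit stride, and lack of padding are all illustrative assumptions; the paper's actual GPU algorithm is not reproduced here.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical sketch: direct convolution that skips multiply-adds for
// zero-valued inputs (single channel, stride 1, no padding).
__global__ void conv2d_skip_zero(const float* in, const float* filt,
                                 float* out, int H, int W, int K) {
    int ox = blockIdx.x * blockDim.x + threadIdx.x;  // output column
    int oy = blockIdx.y * blockDim.y + threadIdx.y;  // output row
    int OH = H - K + 1, OW = W - K + 1;
    if (ox >= OW || oy >= OH) return;

    float acc = 0.0f;
    for (int ky = 0; ky < K; ++ky)
        for (int kx = 0; kx < K; ++kx) {
            float v = in[(oy + ky) * W + (ox + kx)];
            if (v != 0.0f)                  // skip useless work on zeros
                acc += v * filt[ky * K + kx];
        }
    out[oy * OW + ox] = acc;
}

int main() {
    const int H = 8, W = 8, K = 3, OH = H - K + 1, OW = W - K + 1;
    float hin[H * W] = {0.0f};        // mostly-zero (sparse) feature map
    hin[9] = 1.0f; hin[18] = 2.0f;    // a few nonzero activations
    float hfilt[K * K], hout[OH * OW];
    for (int i = 0; i < K * K; ++i) hfilt[i] = 1.0f;

    float *din, *dfilt, *dout;
    cudaMalloc(&din, sizeof(hin));
    cudaMalloc(&dfilt, sizeof(hfilt));
    cudaMalloc(&dout, sizeof(hout));
    cudaMemcpy(din, hin, sizeof(hin), cudaMemcpyHostToDevice);
    cudaMemcpy(dfilt, hfilt, sizeof(hfilt), cudaMemcpyHostToDevice);

    dim3 block(8, 8), grid((OW + 7) / 8, (OH + 7) / 8);
    conv2d_skip_zero<<<grid, block>>>(din, dfilt, dout, H, W, K);
    cudaMemcpy(hout, dout, sizeof(hout), cudaMemcpyDeviceToHost);
    printf("out[0][0] = %.1f\n", hout[0]);  // 1.0 + 2.0 = 3.0

    cudaFree(din); cudaFree(dfilt); cudaFree(dout);
    return 0;
}
```

Note that a per-element branch on `v != 0.0f` only saves work when entire warps see zeros (e.g., contiguous zero regions produced by ReLU); the paper's method presumably restructures the computation around the sparsity rather than relying on such a branch.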
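A similarly hedged sketch of the convolution-pooling fusion follows: each thread produces one 2x2 max-pooled output by evaluating the four underlying convolution results in registers, so the intermediate convolution feature map is never written out and read back between the two layers. The 2x2 max pooling, kernel name, and shapes are assumptions chosen for illustration; the actual fused kernel in the paper may differ.

```cuda
#include <cuda_runtime.h>

// Hypothetical fused kernel: convolution followed by 2x2 max pooling,
// computed per thread without materializing the conv output in memory.
__global__ void conv2d_maxpool_fused(const float* in, const float* filt,
                                     float* out, int H, int W, int K) {
    int OH = H - K + 1, OW = W - K + 1;   // conv output size
    int PH = OH / 2, PW = OW / 2;         // pooled output size
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= PW || py >= PH) return;

    float best = -1e30f;
    for (int dy = 0; dy < 2; ++dy)        // 2x2 pooling window
        for (int dx = 0; dx < 2; ++dx) {
            int oy = py * 2 + dy, ox = px * 2 + dx;
            float acc = 0.0f;             // conv result kept in a register
            for (int ky = 0; ky < K; ++ky)
                for (int kx = 0; kx < K; ++kx) {
                    float v = in[(oy + ky) * W + (ox + kx)];
                    if (v != 0.0f)        // reuse the zero-skipping idea
                        acc += v * filt[ky * K + kx];
                }
            best = fmaxf(best, acc);
        }
    out[py * PW + px] = best;
}
```

The kernel can be launched with host code analogous to the first sketch, with `out` sized PH x PW. Because the pooled result is the only array that leaves the kernel, the intermediate feature map generates no extra memory traffic between the two layers.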
