Joint Regularization on Activations and Weights for Efficient Neural Network Pruning

19 Jun 2019  ·  Qing Yang, Wei Wen, Zuoguan Wang, Hai Li

With the rapid scaling up of deep neural networks (DNNs), extensive research on network model compression, such as weight pruning, has been conducted to improve deployment efficiency. This work aims to advance compression beyond the weights to neuron activations. We propose a joint regularization technique that simultaneously regularizes the distributions of weights and activations. By distinguishing and leveraging the differences in significance among neuron responses and connections during training, the jointly pruned network, namely \textit{JPnet}, optimizes the sparsity of activations and weights to improve execution efficiency. The resulting deep sparsification of JPnet exposes more optimization space for existing DNN accelerators dedicated to sparse matrix operations. We thoroughly evaluate the effectiveness of joint regularization through various network models with different activation functions and on different datasets. Under a $0.4\%$ constraint on inference accuracy degradation, a JPnet can save $72.3\% \sim 98.8\%$ of the computation cost compared to the original dense models, with up to $5.2\times$ and $12.3\times$ reductions in the number of activations and weights, respectively.
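To make the idea of joint regularization concrete, below is a minimal PyTorch-style sketch that augments the task loss with penalties on both weight and activation magnitudes. This is an illustrative assumption of how such a loss could be wired up, not the paper's exact formulation: the use of L1 penalties, the forward hooks on ReLU layers, and the coefficients `lambda_w` and `lambda_a` are hypothetical choices made for the example.

```python
import torch
import torch.nn as nn

class JointRegularizedLoss(nn.Module):
    """Task loss plus penalties on weight and activation magnitudes.

    A minimal sketch of joint regularization: the L1 penalties and the
    coefficients lambda_w / lambda_a are illustrative assumptions, not
    the paper's exact formulation.
    """

    def __init__(self, model, lambda_w=1e-5, lambda_a=1e-5):
        super().__init__()
        self.model = model
        self.lambda_w = lambda_w
        self.lambda_a = lambda_a
        self.task_loss = nn.CrossEntropyLoss()
        self._activations = []
        # Record post-ReLU activations so they can be penalized alongside weights.
        for m in model.modules():
            if isinstance(m, nn.ReLU):
                m.register_forward_hook(
                    lambda _m, _inp, out: self._activations.append(out))

    def forward(self, inputs, targets):
        self._activations.clear()
        logits = self.model(inputs)
        loss = self.task_loss(logits, targets)
        # Weight penalty: pushes insignificant connections toward zero (prunable).
        loss = loss + self.lambda_w * sum(
            p.abs().sum() for p in self.model.parameters() if p.dim() > 1)
        # Activation penalty: encourages sparse neuron responses.
        loss = loss + self.lambda_a * sum(a.abs().sum() for a in self._activations)
        return loss
```

In a typical pruning pipeline, a network would be trained with such a joint objective, after which small weights are thresholded away while the induced activation sparsity is exploited at inference time by accelerators that skip zero operands.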
