Retraining-Based Iterative Weight Quantization for Deep Neural Networks

29 May 2018  ·  Dongsoo Lee, Byeongwook Kim

Model compression has gained a lot of attention due to its ability to reduce hardware resource requirements significantly while maintaining the accuracy of DNNs. Model compression is especially useful for memory-intensive recurrent neural networks, where a smaller memory footprint is crucial not only for reducing storage requirements but also for fast inference. Quantization is known to be an effective model compression method, and researchers are interested in minimizing the number of bits needed to represent parameters. In this work, we introduce an iterative quantization technique that achieves a high compression ratio without any modifications to the training algorithm. In the proposed technique, weight quantization is followed by retraining the model with full-precision weights. We show that iterative retraining generates new sets of weights that can be quantized with decreasing quantization loss at each iteration. We also show that quantization can efficiently leverage pruning, another effective model compression method, and we address implementation issues in combining the two. Our experimental results demonstrate that an LSTM model using 1-bit quantized weights is sufficient for the PTB dataset without any accuracy degradation, while previous methods demand at least 2-4 bits for quantized weights.
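
Below is a minimal, hypothetical sketch of the iterative procedure on a toy NumPy linear-regression model: magnitude pruning, followed by alternating 1-bit quantization (the sign of each surviving weight scaled by the mean absolute value) with full-precision retraining started from the quantized weights. The toy model, the 50% pruning ratio, the mean-magnitude scaling factor, and the function names (`train`, `quantize_1bit`) are illustrative assumptions rather than the paper's exact formulation; in the paper, the decreasing quantization loss arises from retraining over-parameterized LSTM models.

```python
import numpy as np

# Toy setup (assumption): linear regression trained by gradient descent.
# The paper targets LSTM language models; this only illustrates the loop.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 16))
y = X @ rng.normal(size=16)

def train(w, mask, steps=200, lr=0.05):
    """Full-precision (re)training; pruned positions are held at zero."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(X)
        w = (w - lr * grad) * mask
    return w

def quantize_1bit(w, mask):
    """1-bit quantization: sign of each surviving weight, scaled by the
    mean absolute value of the surviving weights (an assumed scaling rule)."""
    alpha = np.abs(w[mask]).mean()
    return alpha * np.sign(w) * mask

# Step 1: train in full precision, then prune by magnitude (50% as an example)
# and retrain the pruned model.
mask = np.ones(16, dtype=bool)
w = train(rng.normal(size=16), mask)
mask = np.abs(w) > np.quantile(np.abs(w), 0.5)
w = train(w * mask, mask)

# Step 2: iterate quantization and full-precision retraining; the quantization
# loss is measured at each iteration.
for it in range(5):
    q = quantize_1bit(w, mask)
    print(f"iter {it}: quantization loss = {np.sum((w - q) ** 2):.4f}")
    w = train(q, mask)  # retrain, starting from the quantized weights
```

The key design point is that quantization never alters the training algorithm itself: each retraining pass runs in full precision, with the quantized weights serving only as the initialization for the next iteration.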
