DPRed: Making Typical Activation and Weight Values Matter In Deep Learning Computing

17 Apr 2018 · Alberto Delmas, Sayeh Sharify, Patrick Judd, Kevin Siu, Milos Nikolic, Andreas Moshovos

We show that selecting a single data type (precision) for all values in a deep neural network, even if that data type differs per layer, amounts to worst-case design. Much shorter data types can be used if we target the common case by adjusting precision at a much finer granularity. We propose Dynamic Precision Reduction (DPRed), where we group weights and activations and encode each group using a precision specific to that group. The per-group precisions are selected statically for the weights and dynamically by hardware for the activations. We exploit these precisions to reduce: 1) off-chip storage and off- and on-chip communication, and 2) execution time. DPRed compression reduces off-chip traffic to nearly 35% and 33% of the uncompressed baseline on average for 16b and 8b models, respectively. This makes it possible to sustain higher performance for a given off-chip memory interface while also boosting energy efficiency. We also demonstrate designs where the time required to process each group of activations and/or weights scales proportionally with the precision the group uses, for both convolutional and fully-connected layers. This improves execution time and energy efficiency for both dense and sparse networks. The techniques also work with 8-bit networks, where speedups of 1.82x and 2.81x are achieved for two hardware variants that take advantage of dynamic precision variability.

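To make the per-group dynamic precision idea concrete, the sketch below estimates the storage footprint when activations are split into small groups and each group is stored at the narrowest precision that covers its largest value, plus a small per-group precision field. This is only an illustrative model under simplifying assumptions (unsigned fixed-point magnitudes, a hypothetical group size of 16, helper names `group_precision` and `dpred_footprint` invented here); the paper's actual encoding and hardware scheme also handles signs, weights, and the memory interface details.

```python
import numpy as np

def group_precision(values):
    """Bits needed to represent the largest magnitude in the group.
    Simplification: unsigned fixed-point values; at least 1 bit per value."""
    max_val = int(np.max(np.abs(values)))
    return max(1, max_val.bit_length())

def dpred_footprint(activations, group_size=16, base_bits=16):
    """Estimated bits to store activations with per-group dynamic precision:
    each group carries a small precision field plus its values at that precision."""
    precision_field_bits = int(np.ceil(np.log2(base_bits + 1)))  # e.g. 5 bits to encode 0..16
    flat = activations.flatten()
    total = 0
    for start in range(0, len(flat), group_size):
        group = flat[start:start + group_size]
        p = group_precision(group)
        total += precision_field_bits + p * len(group)
    return total

# Toy comparison against a fixed 16-bit baseline.
rng = np.random.default_rng(0)
acts = np.clip(rng.exponential(scale=200, size=(4, 64)), 0, 2**16 - 1).astype(np.int32)
dyn_bits = dpred_footprint(acts)
fixed_bits = acts.size * 16
print(f"dynamic: {dyn_bits} bits, fixed 16b: {fixed_bits} bits, ratio: {dyn_bits / fixed_bits:.2f}")
```

Because typical activation values are small, most groups need far fewer bits than the worst-case data type, which is the effect the abstract's traffic-reduction numbers capture.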