OverQ: Opportunistic Outlier Quantization for Neural Network Accelerators

13 Oct 2019  ·  Ritchie Zhao, Jordan Dotzel, Zhanqiu Hu, Preslav Ivanov, Christopher De Sa, Zhiru Zhang

Outliers in weights and activations pose a key challenge for fixed-point quantization of neural networks. While they can be addressed by fine-tuning, this is not practical for ML service providers (e.g., Google or Microsoft) who often receive customer models without training data. Specialized hardware for handling activation outliers can enable low-precision neural networks, but at the cost of nontrivial area overhead. We instead propose overwrite quantization (OverQ), a lightweight hardware technique that opportunistically increases the bitwidth of activation outliers by letting them overwrite nearby zeros. It has two major modes of operation: range overwrite and precision overwrite. Range overwrite reallocates bits to increase the range of outliers, while precision overwrite reuses zeros to increase the precision of non-outlier values. Combining range overwrite with simple cascading logic, we handle the vast majority of outliers and significantly improve model accuracy at low bitwidth. Our experiments show that with modest cascading we can consistently handle over 90% of outliers and achieve +5% ImageNet Top-1 accuracy on a quantized ResNet-50 at 4 bits. Our ASIC prototype shows that OverQ can be implemented efficiently on top of existing weight-stationary systolic arrays with a small area increase per processing element. We envision this technique complementing modern DNN accelerator designs to provide small accuracy gains at negligible area overhead.
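
To make the mechanism concrete, below is a rough behavioral sketch in NumPy of what range overwrite does at the value level, assuming a simple symmetric fixed-point quantizer: a clipped outlier borrows the slot of an adjacent zero to gain extra range. The function name quantize_overq_range, the extended range of 2^B − 1, and the cascade distance of one slot are illustrative assumptions, not the paper's hardware encoding.

```python
# Behavioral sketch of "range overwrite" (not the paper's hardware scheme).
# Assumption: an outlier that would be clipped at the B-bit limit may borrow
# the slot of one adjacent zero-valued activation to extend its range.
import numpy as np

def quantize_overq_range(acts, bits=4, scale=1.0):
    """Quantize a 1-D activation vector, letting clipped outliers borrow the
    slot of an adjacent zero to gain representable range."""
    qmax = 2 ** (bits - 1) - 1          # normal B-bit clipping level
    qmax_ext = 2 ** bits - 1            # extended range when a zero is borrowed (assumption)
    q = np.round(acts / scale).astype(int)

    out = np.clip(q, -qmax, qmax)       # default: clip everything to B bits
    used = np.zeros(len(q), dtype=bool)  # zero slots already overwritten

    for i, v in enumerate(q):
        if abs(v) <= qmax:
            continue                     # in range, not an outlier
        # look for a free zero in the neighboring slots (cascade distance 1 here)
        for j in (i - 1, i + 1):
            if 0 <= j < len(q) and out[j] == 0 and not used[j]:
                out[i] = np.clip(v, -qmax_ext, qmax_ext)  # outlier keeps more range
                used[j] = True           # the zero's slot now carries the extra bits
                break
    return out * scale

acts = np.array([0.0, 7.5, 0.2, 0.0, 12.0, 1.0])
print(quantize_overq_range(acts, bits=4, scale=1.0))
```

Precision overwrite would follow the same pattern, except the borrowed zero slot extends the fractional precision of an in-range neighbor rather than the integer range of an outlier.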
