PositNN: Tapered Precision Deep Learning Inference for the Edge

The performance of neural networks, particularly deep neural networks, is often limited by the underlying hardware. Deep neural network computations are expensive, have a large memory footprint, and are power hungry. Conventional reduced-precision numerical formats, such as fixed-point and floating point, cannot accurately represent deep neural network parameters, which have a nonlinear distribution and a small dynamic range. The recently proposed posit numerical format, with its tapered precision, represents small values more accurately than these formats. In this work, we propose PositNN, an ultra-low-precision deep neural network that uses posits during inference. The efficacy of PositNN is demonstrated on a deep neural network architecture across three datasets (MNIST, Fashion MNIST, and CIFAR-10), where an 8-bit PositNN outperforms other 5- to 8-bit low-precision neural networks and a 32-bit floating-point baseline network.

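Posits achieve tapered precision through a variable-length regime field: bit patterns with magnitudes near 1 devote most of their bits to the fraction, while values at the extremes of the dynamic range spend bits on the regime instead. The sketch below is not from the paper; it is a minimal illustrative decoder for an 8-bit posit with one exponent bit (es=1, an assumed configuration) that shows how the sign, regime, exponent, and fraction fields combine into a real value.

```python
def decode_posit8(bits, es=1):
    """Decode an 8-bit posit (int in 0..255) to a Python float.

    Illustrative sketch of the posit format: the regime length varies,
    so fraction precision tapers off as magnitude moves away from 1.
    """
    n = 8
    if bits == 0:
        return 0.0
    if bits == 0x80:                       # Not a Real (NaR)
        return float("nan")

    sign = -1.0 if bits & 0x80 else 1.0
    if sign < 0:                           # decode the two's-complement magnitude
        bits = (-bits) & 0xFF

    # Bits after the sign, most significant first.
    rest = [(bits >> i) & 1 for i in range(n - 2, -1, -1)]

    # Regime: run of identical bits terminated by the opposite bit.
    first = rest[0]
    run = 1
    while run < len(rest) and rest[run] == first:
        run += 1
    k = (run - 1) if first == 1 else -run

    # Whatever remains after the regime and its terminating bit
    # is split into exponent bits (up to es) and fraction bits.
    remaining = rest[run + 1:]
    exp_bits = remaining[:es]
    frac_bits = remaining[es:]
    e = sum(b << (es - 1 - i) for i, b in enumerate(exp_bits))
    frac = sum(b * 2.0 ** -(i + 1) for i, b in enumerate(frac_bits))

    useed = 2 ** (2 ** es)                 # 4 when es = 1
    return sign * (useed ** k) * (2 ** e) * (1.0 + frac)


if __name__ == "__main__":
    # 0x48 = 0b0100_1000 -> 1.5: short regime, so four fraction bits survive.
    # 0x7F = 0b0111_1111 -> 4096: long regime, maximum magnitude, no fraction bits.
    print(decode_posit8(0x48), decode_posit8(0x7F), decode_posit8(0xC0))
```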