no code implementations • 5 Dec 2018 • Zachariah Carmichael, Hamed F. Langroudi, Char Khazanov, Jeffrey Lillie, John L. Gustafson, Dhireesha Kudithipudi
We propose a precision-adaptable FPGA soft core for exact multiply-and-accumulate, enabling a uniform comparison across three numerical formats: fixed-point, floating-point, and posit.
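The exact multiply-and-accumulate (EMAC) idea is easy to prototype in software: accumulate every product in a wide scratch register and round only once, after the final sum. A minimal sketch, modeling fixed-point values as scaled integers (the names `to_fixed` and `emac_fixed` are illustrative, not from the paper's soft core):

```python
# Exact MAC sketch: products land in a wide accumulator (a "quire"-style
# register), so rounding happens only once, at the very end.

def to_fixed(x: float, frac_bits: int) -> int:
    """Quantize a real value to a signed fixed-point integer."""
    return round(x * (1 << frac_bits))

def emac_fixed(weights, activations, frac_bits: int) -> float:
    acc = 0  # Python ints are arbitrary precision, like a wide quire
    for w, a in zip(weights, activations):
        acc += to_fixed(w, frac_bits) * to_fixed(a, frac_bits)
    # One rounding step: each product carries 2*frac_bits fractional
    # bits, so rescale once after accumulation.
    return acc / (1 << (2 * frac_bits))

print(emac_fixed([0.5, -0.25], [0.125, 0.75], frac_bits=8))  # -0.125, exact
```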
no code implementations • 25 Mar 2019 • Zachariah Carmichael, Hamed F. Langroudi, Char Khazanov, Jeffrey Lillie, John L. Gustafson, Dhireesha Kudithipudi
Our results indicate that posits are a natural fit for DNN inference, outperforming the other formats at $\leq$8-bit precision, and can be realized with resource requirements competitive with those of floating point.
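For readers unfamiliar with the format, here is a minimal sketch of decoding an n-bit posit following the standard layout (sign, regime run, es exponent bits, fraction); the function name is ours:

```python
def decode_posit(p: int, n: int = 8, es: int = 1) -> float:
    """Decode an n-bit posit bit pattern (es exponent bits) to a float."""
    if p == 0:
        return 0.0
    if p == 1 << (n - 1):
        return float("nan")            # NaR (not a real)
    sign = -1.0 if p & (1 << (n - 1)) else 1.0
    if sign < 0:
        p = (1 << n) - p               # two's complement for negatives
    bits = format(p, f"0{n}b")[1:]     # drop the sign bit
    r0 = bits[0]                       # regime: run of identical bits
    m = len(bits) - len(bits.lstrip(r0))
    k = m - 1 if r0 == "1" else -m
    rest = bits[m + 1:]                # skip the regime terminator
    e_bits = rest[:es].ljust(es, "0")  # truncated exponent pads with 0
    e = int(e_bits, 2) if es else 0
    f_bits = rest[es:]
    f = int(f_bits, 2) / (1 << len(f_bits)) if f_bits else 0.0
    return sign * (1.0 + f) * 2.0 ** (k * (1 << es) + e)

print(decode_posit(0b01100000))  # 4.0 for posit<8,1>
```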
no code implementations • 30 Jul 2019 • Hamed F. Langroudi, Zachariah Carmichael, Dhireesha Kudithipudi
Recently, the posit numerical format has shown promise for DNN data representation and compute with ultra-low precision ([5..8]-bit).
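One way to see how [5..8]-bit posits can carry DNN data: enumerate every bit pattern of the format and round each value to its nearest representable posit. A hedged sketch, reusing `decode_posit` from the sketch above (the helper names are ours):

```python
import bisect
import math

def posit_codebook(n: int, es: int):
    """All finite values of an n-bit posit, sorted (NaR filtered out)."""
    vals = {decode_posit(p, n, es) for p in range(1 << n)}
    return sorted(v for v in vals if not math.isnan(v))

def quantize(xs, n: int = 5, es: int = 1):
    """Round each value to the nearest n-bit posit."""
    book = posit_codebook(n, es)
    out = []
    for x in xs:
        i = bisect.bisect_left(book, x)
        cands = book[max(i - 1, 0):i + 1]  # the two nearest neighbors
        out.append(min(cands, key=lambda v: abs(v - x)))
    return out

print(quantize([0.3, -1.7, 0.02], n=5, es=1))
```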
no code implementations • 6 Aug 2019 • Hamed F. Langroudi, Zachariah Carmichael, David Pastuch, Dhireesha Kudithipudi
Additionally, the framework is amenable to different quantization approaches and supports mixed-precision floating-point and fixed-point numerical formats.
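A sketch of what mixed-precision support can look like: a per-layer config selects between fixed-point and reduced-precision floating point. The config layout and function names are assumptions for illustration, not the framework's actual API:

```python
import numpy as np

def q_fixed(x, bits: int, frac_bits: int):
    """Symmetric fixed-point: round to a grid of 2**-frac_bits, saturate."""
    scale = 2.0 ** frac_bits
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.clip(np.round(x * scale), lo, hi) / scale

def q_float(x, dtype=np.float16):
    """Reduced-precision float: round through a narrower IEEE type."""
    return np.asarray(x, dtype=dtype).astype(np.float64)

# Hypothetical per-layer precision assignment.
config = {"conv1": ("fixed", 8, 6), "fc1": ("float", np.float16)}

def quantize_layer(name, w, cfg=config):
    kind, *args = cfg[name]
    return q_fixed(w, *args) if kind == "fixed" else q_float(w, args[0])
```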
no code implementations • 6 Apr 2021 • Hamed F. Langroudi, Vedant Karia, Tej Pandit, Dhireesha Kudithipudi
In this research, we propose a new low-precision framework, TENT, to leverage the benefits of a tapered fixed-point numerical format in TinyML models.
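Tapered formats trade fraction bits for dynamic range: values near 1 get the most fraction bits, and precision tapers off toward the extremes. A sketch of that trade-off for an 8-bit, posit-style layout (sign + regime + es exponent bits + fraction); this illustrates tapering in general, not TENT's exact encoding:

```python
def fraction_bits(regime_len: int, n: int = 8, es: int = 1) -> int:
    """Fraction bits left over once sign, regime, and exponent are spent."""
    used = 1 + regime_len + 1 + es  # sign + regime run + terminator + es
    return max(n - used, 0)

for m in range(1, 7):
    # Longer regimes reach larger/smaller magnitudes but keep fewer
    # fraction bits, so accuracy tapers away from 1.
    print(f"regime length {m}: {fraction_bits(m)} fraction bits")
```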
no code implementations • 20 Oct 2018 • Hamed F. Langroudi, Zachariah Carmichael, John L. Gustafson, Dhireesha Kudithipudi
Conventional reduced-precision numerical formats, such as fixed-point and floating-point, cannot accurately represent deep neural network parameters, which have a nonlinear distribution and a small dynamic range.
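A quick numerical illustration of the mismatch: bell-shaped weights concentrated near zero lose much of their relative accuracy on a uniform fixed-point grid, whose step size is constant across the whole range (the parameter choices below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, 10_000)         # bell-shaped, small dynamic range

def q_fixed(x, bits=8, frac_bits=7):      # uniform step of 2**-7
    scale = 2.0 ** frac_bits
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.clip(np.round(x * scale), lo, hi) / scale

err = np.abs(q_fixed(w) - w)
print(f"mean abs error: {err.mean():.2e}")  # roughly a quarter of the step
print(f"step size:      {2.0 ** -7:.2e}")   # the step dwarfs tiny weights
```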