Search Results for author: Hamed F. Langroudi

Found 6 papers, 0 papers with code

Deep Positron: A Deep Neural Network Using the Posit Number System

no code implementations • 5 Dec 2018 • Zachariah Carmichael, Hamed F. Langroudi, Char Khazanov, Jeffrey Lillie, John L. Gustafson, Dhireesha Kudithipudi

We propose a precision-adaptable FPGA soft core for exact multiply-and-accumulate, enabling a uniform comparison across three numerical formats: fixed-point, floating-point, and posit.
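
For intuition, the idea of an exact accumulator can be mimicked in software with exact rational arithmetic. The sketch below is an illustrative stand-in, not the paper's FPGA soft core; the function name `exact_mac` is hypothetical.

```python
from fractions import Fraction

def exact_mac(weights, activations):
    """Multiply-and-accumulate with no intermediate rounding.

    Software stand-in for an exact MAC unit (illustrative, not the
    paper's FPGA core): Fraction keeps every product bit exactly, so
    any difference between formats comes from how the inputs were
    encoded, never from accumulator rounding.
    """
    acc = Fraction(0)
    for w, a in zip(weights, activations):
        acc += Fraction(w) * Fraction(a)  # exact product, exact sum
    return float(acc)  # round exactly once, at the end
```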

Performance-Efficiency Trade-off of Low-Precision Numerical Formats in Deep Neural Networks

no code implementations • 25 Mar 2019 • Zachariah Carmichael, Hamed F. Langroudi, Char Khazanov, Jeffrey Lillie, John L. Gustafson, Dhireesha Kudithipudi

Our results indicate that posits are a natural fit for DNN inference, outperforming fixed- and floating-point formats at $\leq$8-bit precision, and can be realized with competitive resource requirements relative to those of floating point.

Deep Learning Training on the Edge with Low-Precision Posits

no code implementations • 30 Jul 2019 • Hamed F. Langroudi, Zachariah Carmichael, Dhireesha Kudithipudi

Recently, the posit numerical format has shown promise for DNN data representation and compute with ultra-low precision ([5..8]-bit).
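
For readers unfamiliar with the format, a minimal posit decoder follows; the function name and the n=8, es=1 defaults are assumptions for illustration, not the paper's training code.

```python
def decode_posit(bits: int, n: int = 8, es: int = 1) -> float:
    """Decode an n-bit posit (sign, regime, exponent, fraction) to a float.

    Minimal sketch for illustration; not the paper's implementation.
    """
    bits &= (1 << n) - 1
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")                # NaR ("not a real")
    sign = bits >> (n - 1)
    if sign:
        bits = (1 << n) - bits             # two's-complement negation
    s = format(bits, f"0{n}b")[1:]         # bits after the sign bit
    run = len(s) - len(s.lstrip(s[0]))     # regime: run of identical bits
    k = run - 1 if s[0] == "1" else -run
    rest = s[run + 1:]                     # skip the regime terminator
    e = int(rest[:es].ljust(es, "0") or "0", 2)
    frac = rest[es:]
    f = int(frac, 2) / (1 << len(frac)) if frac else 0.0
    # value = (-1)^sign * useed^k * 2^e * (1 + f), with useed = 2^(2^es)
    return (-1) ** sign * 2.0 ** ((1 << es) * k + e) * (1 + f)

# Examples: decode_posit(0b01000000) -> 1.0, decode_posit(0b01010000) -> 2.0
```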

Cheetah: Mixed Low-Precision Hardware & Software Co-Design Framework for DNNs on the Edge

no code implementations • 6 Aug 2019 • Hamed F. Langroudi, Zachariah Carmichael, David Pastuch, Dhireesha Kudithipudi

Additionally, the framework is amenable to different quantization approaches and supports mixed-precision floating-point and fixed-point numerical formats (a generic quantization step is sketched below).

Quantization
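
As a rough picture of one such quantization step, here is a generic round-to-nearest fixed-point quantizer; the name, Q-format split, and bit-widths are assumptions, not Cheetah's actual policies.

```python
import numpy as np

def quantize_qformat(x, total_bits=8, frac_bits=4):
    """Round-to-nearest signed fixed-point (Q-format) quantization.

    Generic sketch, not Cheetah's implementation: snap x to a grid of
    step 2**-frac_bits and clip to the signed total_bits range.
    """
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    return np.clip(np.round(np.asarray(x) * scale), lo, hi) / scale
```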

TENT: Efficient Quantization of Neural Networks on the tiny Edge with Tapered FixEd PoiNT

no code implementations • 6 Apr 2021 • Hamed F. Langroudi, Vedant Karia, Tej Pandit, Dhireesha Kudithipudi

In this research, we propose a new low-precision framework, TENT, to leverage the benefits of a tapered fixed-point numerical format in TinyML models (see the sketch below).

Quantization
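
One way to picture format adaptation is a per-tensor split between integer and fraction bits. The rule below is a hypothetical illustration under that assumption, not TENT's actual tapered fixed-point definition.

```python
import numpy as np

def fit_fraction_bits(x, total_bits=8):
    """Pick a per-tensor integer/fraction bit split (hypothetical rule).

    Not TENT's tapering scheme: cover the tensor's largest magnitude
    with integer bits and spend the remainder on fraction bits.
    """
    max_mag = float(np.max(np.abs(x)))
    int_bits = int(np.ceil(np.log2(max_mag + 1))) if max_mag > 0 else 0
    return max(total_bits - 1 - int_bits, 0)   # one bit reserved for sign
```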

PositNN: Tapered Precision Deep Learning Inference for the Edge

no code implementations • 20 Oct 2018 • Hamed F. Langroudi, Zachariah Carmichael, John L. Gustafson, Dhireesha Kudithipudi

Conventional reduced-precision numerical formats, such as fixed-point and floating-point, cannot accurately represent deep neural network parameters, which exhibit a nonlinear distribution and a small dynamic range.
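
A quick numeric illustration of that limitation (the Q1.6 bit split and weight values are assumptions for illustration, not taken from the paper):

```python
# An 8-bit Q1.6 fixed-point grid has step 2**-6 ~= 0.0156, so typical
# small DNN weights collapse toward zero with large relative error.
step = 2 ** -6
for w in (0.003, -0.007, 0.02):
    q = round(w / step) * step
    print(f"w={w:+.4f} -> q={q:+.4f} (rel. err {abs(q - w) / abs(w):.0%})")
```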
