Search Results for author: John L. Gustafson

Found 4 papers, 0 papers with code

Performance-Efficiency Trade-off of Low-Precision Numerical Formats in Deep Neural Networks

no code implementations • 25 Mar 2019 • Zachariah Carmichael, Hamed F. Langroudi, Char Khazanov, Jeffrey Lillie, John L. Gustafson, Dhireesha Kudithipudi

Our results indicate that posits are a natural fit for DNN inference, outperforming the other formats at $\leq$8-bit precision, and can be realized with resource requirements competitive with those of floating point.
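
As a concrete picture of the format these results rest on, here is a minimal posit decoder in Python. It is a sketch of standard posit decoding, not the authors' implementation; the choice of es=1 is arbitrary (the paper evaluates several configurations).

```python
def posit_to_float(bits: int, nbits: int = 8, es: int = 1) -> float:
    """Decode an n-bit posit (held in an unsigned int) into a Python float."""
    mask = (1 << nbits) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (nbits - 1):
        return float("nan")                  # NaR ("Not a Real")
    sign = bits >> (nbits - 1)
    if sign:
        bits = (-bits) & mask                # negative posits: two's complement
    rest = (bits << 1) & mask                # drop the sign bit
    first = rest >> (nbits - 1)
    run, remaining = 0, nbits - 1
    while remaining > 0 and (rest >> (nbits - 1)) == first:
        rest = (rest << 1) & mask            # count the regime run length
        run += 1
        remaining -= 1
    k = run - 1 if first else -run           # regime value
    if remaining > 0:                        # skip the regime terminator bit
        rest = (rest << 1) & mask
        remaining -= 1
    e_bits = min(es, remaining)              # exponent field may be truncated
    e = (rest >> (nbits - e_bits)) if e_bits else 0
    e <<= es - e_bits                        # missing exponent bits are zero
    rest = (rest << e_bits) & mask
    remaining -= e_bits
    frac = rest >> (nbits - remaining) if remaining else 0
    # value = (1 + fraction) * 2^(k * 2^es + e), negated if the sign was set
    result = (1 + frac / (1 << remaining)) * 2.0 ** ((k << es) + e)
    return -result if sign else result

# Spot checks for 8-bit posits with es=1:
assert posit_to_float(0b01000000) == 1.0
assert posit_to_float(0b01001000) == 1.5
assert posit_to_float(0b01111111) == 4096.0  # maxpos = useed^(nbits-2) = 4^6
```

The run-length-encoded regime field is what gives posits their tapered precision: values near 1.0 get more fraction bits than values at the extremes of the dynamic range.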

Deep Positron: A Deep Neural Network Using the Posit Number System

no code implementations • 5 Dec 2018 • Zachariah Carmichael, Hamed F. Langroudi, Char Khazanov, Jeffrey Lillie, John L. Gustafson, Dhireesha Kudithipudi

We propose a precision-adaptable FPGA soft core for exact multiply-and-accumulate that enables a uniform comparison across three numerical formats: fixed-point, floating-point, and posit.
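
To make the idea of an exact multiply-and-accumulate concrete, here is a software sketch in Python: products are summed without any intermediate rounding, analogous in spirit to a wide (quire-like) hardware accumulator, though it uses exact rationals rather than the paper's FPGA datapath.

```python
from fractions import Fraction

def exact_mac(pairs):
    """Sum of products with no intermediate rounding; round once at the end.

    A software stand-in for an exact MAC unit: exact rational arithmetic
    plays the role the wide accumulator plays in hardware.
    """
    acc = Fraction(0)
    for a, b in pairs:
        acc += Fraction(a) * Fraction(b)     # exact product, exact sum
    return float(acc)                        # the only rounding step

def naive_mac(pairs):
    """Float MAC that rounds after every operation."""
    acc = 0.0
    for a, b in pairs:
        acc += a * b
    return acc

pairs = [(1e16, 1.0), (1.0, 1.0), (-1e16, 1.0)]
print(naive_mac(pairs))   # 0.0 -- the 1.0 is lost to intermediate rounding
print(exact_mac(pairs))   # 1.0 -- exact accumulation preserves it
```

The point of the exercise is the comparison methodology: when the dot product itself is exact, any accuracy difference between the three formats comes from the operand representation alone.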

PositNN: Tapered Precision Deep Learning Inference for the Edge

no code implementations • 20 Oct 2018 • Hamed F. Langroudi, Zachariah Carmichael, John L. Gustafson, Dhireesha Kudithipudi

Conventional reduced-precision numerical formats, such as fixed point and floating point, cannot accurately represent deep neural network parameters, which have a nonlinear distribution and a small dynamic range.
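
A small numerical experiment makes the dynamic-range point tangible. The sketch below is illustrative only (the Gaussian weight distribution and Q1.7 fixed-point format are assumptions, not the paper's setup): it quantizes bell-shaped "weights" to 8-bit fixed point and shows that relative error is worst exactly where DNN weights cluster, near zero, because uniform step spacing spends its precision evenly across the whole range.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, 10_000)          # weights concentrated near zero
scale = 2 ** -7                            # Q1.7 fixed point: range [-1, 1)
wq = np.clip(np.round(w / scale), -128, 127) * scale
rel_err = np.abs(wq - w) / np.maximum(np.abs(w), 1e-12)
for lo, hi in [(0.0, 0.01), (0.01, 0.1), (0.1, 1.0)]:
    band = (np.abs(w) >= lo) & (np.abs(w) < hi)
    if band.any():
        print(f"|w| in [{lo}, {hi}): mean relative error {rel_err[band].mean():.3f}")
```

A tapered format like the posit instead concentrates its representable values near zero, matching the weight distribution.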

Tensors Come of Age: Why the AI Revolution will help HPC

no code implementations • 26 Sep 2017 • John L. Gustafson, Lenore M. Mullin

This article discusses how the automation of tensor algorithms, based on A Mathematics of Arrays and the Psi Calculus, together with a new way to represent numbers, Unum Arithmetic, enables mechanically provable, scalable, portable, and more numerically accurate software.
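
For a flavor of what the Psi Calculus mechanizes, here is a toy Python version of its central indexing operation ψ under the usual row-major layout. The function name and signature are illustrative, not from the article; the formalism defines ψ algebraically so that such index computations can be derived and verified mechanically.

```python
def psi(index, shape, ravel):
    """Toy ψ: select the subarray of the row-major flat list `ravel`
    (viewed with shape `shape`) addressed by the partial index vector."""
    assert len(index) <= len(shape)
    strides, acc = [], 1
    for dim in reversed(shape):              # row-major strides
        strides.append(acc)
        acc *= dim
    strides.reverse()
    offset = sum(i * s for i, s in zip(index, strides))
    sub_shape = tuple(shape[len(index):])
    length = 1
    for dim in sub_shape:
        length *= dim
    return sub_shape, ravel[offset:offset + length]

A = list(range(12))                 # a 3x4 array stored row-major
print(psi((2, 1), (3, 4), A))       # ((), [9])               -- a scalar
print(psi((2,), (3, 4), A))         # ((4,), [8, 9, 10, 11])  -- row 2
```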
