no code implementations • 26 Sep 2017 • John L. Gustafson, Lenore M. Mullin
This article discusses how the automation of tensor algorithms, based on A Mathematics of Arrays and the Psi Calculus, combined with a new way to represent numbers, Unum Arithmetic, enables mechanically provable, scalable, portable, and more numerically accurate software.
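In A Mathematics of Arrays, an array is characterized by its shape together with the psi indexing function, and whole algorithms reduce to index arithmetic on flat storage. The Python sketch below illustrates that view under stated assumptions; the names MoAArray and psi are illustrative stand-ins, not code from the paper.

```python
from functools import reduce
from operator import mul

class MoAArray:
    """An array as a pair (shape, flat row-major data), in the MoA spirit."""
    def __init__(self, shape, data):
        assert len(data) == reduce(mul, shape, 1)
        self.shape, self.data = tuple(shape), list(data)

def psi(index, arr):
    """Psi-style indexing: a full index yields a scalar, a partial index
    yields the remaining subarray, all by shape/offset arithmetic alone."""
    rest = arr.shape[len(index):]
    offset = 0
    for i, ix in enumerate(index):
        offset += ix * reduce(mul, arr.shape[i + 1:], 1)
    if not rest:                       # full index -> scalar
        return arr.data[offset]
    size = reduce(mul, rest, 1)
    return MoAArray(rest, arr.data[offset:offset + size])

a = MoAArray((2, 3), [0, 1, 2, 3, 4, 5])
print(psi((1, 2), a))      # 5
print(psi((0,), a).data)   # [0, 1, 2] -- first row as a subarray
```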
no code implementations • 5 Dec 2018 • Zachariah Carmichael, Hamed F. Langroudi, Char Khazanov, Jeffrey Lillie, John L. Gustafson, Dhireesha Kudithipudi
We propose a precision-adaptable FPGA soft core for exact multiply-and-accumulate, enabling a uniform comparison across three numerical formats: fixed-point, floating-point, and posit.
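The key idea behind an exact multiply-and-accumulate unit is to keep every product in an accumulator wide enough that no intermediate rounding occurs, deferring rounding to a single final step. Below is a minimal Python sketch of that idea for fixed-point inputs; the function name and the 7-bit fraction split are assumptions for illustration, not the paper's FPGA design.

```python
def exact_mac_fixed(xs, ws, frac_bits=7):
    """Exact MAC for fixed-point inputs: one rounding at the very end."""
    scale = 1 << frac_bits
    acc = 0  # wide integer accumulator holds 2*frac_bits fractional bits exactly
    for x, w in zip(xs, ws):
        xi = round(x * scale)          # quantize inputs to fixed point
        wi = round(w * scale)
        acc += xi * wi                 # exact integer product, no rounding
    return acc / (scale * scale)       # single rounding step

print(exact_mac_fixed([0.1, 0.2, 0.3], [0.4, 0.5, 0.6]))  # ~0.32
```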
no code implementations • 25 Mar 2019 • Zachariah Carmichael, Hamed F. Langroudi, Char Khazanov, Jeffrey Lillie, John L. Gustafson, Dhireesha Kudithipudi
Our results indicate that posits are a natural fit for DNN inference, outperforming fixed- and floating-point formats at $\leq$8-bit precision, and can be realized with resource requirements competitive with those of floating point.
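For reference, a posit encodes a sign bit, a run-length-coded regime, up to es exponent bits, and a fraction, which gives tapered accuracy around 1. The sketch below decodes an 8-bit posit to a float following the standard field layout; es=1 is just one assumed configuration, and the function is illustrative rather than the authors' implementation.

```python
def posit_to_float(bits, nbits=8, es=1):
    """Decode an nbits-wide posit with es exponent bits to a Python float."""
    mask = (1 << nbits) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (nbits - 1):
        return float('nan')            # NaR (not a real)
    sign = -1.0 if bits >> (nbits - 1) else 1.0
    if sign < 0:
        bits = (-bits) & mask          # negatives decode via two's complement
    body = bits & (mask >> 1)          # drop the sign bit
    n = nbits - 1
    # Regime: run of identical bits terminated by the opposite bit.
    first = (body >> (n - 1)) & 1
    run = 0
    while run < n and ((body >> (n - 1 - run)) & 1) == first:
        run += 1
    k = run - 1 if first else -run
    rest_bits = n - run - 1            # bits left after the regime terminator
    rest = body & ((1 << rest_bits) - 1) if rest_bits > 0 else 0
    # Exponent: up to es bits, treated as zero-padded if truncated.
    e_bits = min(es, max(rest_bits, 0))
    e = (rest >> (rest_bits - e_bits)) << (es - e_bits) if e_bits > 0 else 0
    f_bits = rest_bits - e_bits
    frac = rest & ((1 << f_bits) - 1) if f_bits > 0 else 0
    frac_val = frac / (1 << f_bits) if f_bits > 0 else 0.0
    useed = 2 ** (2 ** es)
    return sign * useed ** k * 2 ** e * (1 + frac_val)

print(posit_to_float(0b01000000))  # 1.0
print(posit_to_float(0b01010000))  # 2.0 (regime k=0, exponent bit = 1)
print(posit_to_float(0b11000000))  # -1.0
```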
no code implementations • 20 Oct 2018 • Hamed F. Langroudi, Zachariah Carmichael, John L. Gustafson, Dhireesha Kudithipudi
Conventional reduced-precision numerical formats, such as fixed-point and floating-point, cannot accurately represent deep neural network parameters, which have a nonlinear distribution and a small dynamic range.
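A back-of-the-envelope comparison makes the dynamic-range point concrete. Assuming a Q1.7 fixed-point split and posit(8, es=0), both hypothetical choices for illustration:

```python
import math

# Fixed point Q1.7: nonzero magnitudes from 1/128 up to 127/128.
fixed_min, fixed_max = 1 / 128, 127 / 128
# Posit(8, es=0): minpos = 2^-6, maxpos = 2^6.
posit_min, posit_max = 2.0 ** -6, 2.0 ** 6

for name, lo, hi in [("fixed Q1.7", fixed_min, fixed_max),
                     ("posit(8,0)", posit_min, posit_max)]:
    print(f"{name}: {20 * math.log10(hi / lo):.1f} dB dynamic range")
# fixed Q1.7 : ~42 dB; posit(8,0): ~72 dB. Posits cover a wider range
# while concentrating accuracy near 1, where trained weights cluster.
```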