no code implementations • 14 Apr 2018 • Marc Ortiz, Adrián Cristal, Eduard Ayguadé, Marc Casas
The use of low-precision fixed-point arithmetic along with stochastic rounding has been proposed as a promising alternative to the commonly used 32-bit floating-point arithmetic for enhancing neural network training in terms of performance and energy efficiency.
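The stochastic rounding mentioned above can be illustrated with a minimal sketch: instead of always rounding to the nearest fixed-point value, a number is rounded up with probability proportional to its distance from the lower grid point, making the rounding unbiased in expectation. The function name, parameters, and NumPy-based implementation below are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def stochastic_round_fixed_point(x, frac_bits=8, rng=None):
    """Stochastically round x to a fixed-point grid with `frac_bits`
    fractional bits (illustrative sketch, not the paper's implementation).

    The value is rounded up with probability equal to its fractional
    remainder on the scaled grid, so E[round(x)] == x.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = 1 << frac_bits                 # grid spacing is 1/scale
    scaled = np.asarray(x, dtype=np.float64) * scale
    lower = np.floor(scaled)               # lower grid point
    prob_up = scaled - lower               # fractional remainder in [0, 1)
    rounded = lower + (rng.random(size=scaled.shape) < prob_up)
    return rounded / scale
```

For example, with 8 fractional bits the value 0.123 lands between 31/256 and 32/256, and is rounded to one or the other with probabilities such that the average over many draws converges to 0.123. This unbiasedness is what lets gradient updates smaller than the grid spacing still accumulate during low-precision training.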