no code implementations • 18 May 2023 • Juyoung Yun, Byungkon Kang, Francois Rameau, Zhoulai Fu
Contrary to literature that credits the success of noise-tolerant neural networks to regularization effects, our study, supported by a series of rigorous experiments, provides a quantitative explanation of why standalone IEEE 16-bit floating-point neural networks can perform on par with 32-bit and mixed-precision networks across various image classification tasks.
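To make the distinction concrete, the following is a minimal sketch (not the paper's code) contrasting standalone IEEE 16-bit training with the more common mixed-precision setup; it assumes PyTorch and a CUDA GPU, and the toy model, data, and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model():
    # Illustrative toy classifier; stands in for the networks studied in the paper.
    return nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, 10))

device = "cuda"
x = torch.randn(64, 1, 28, 28, device=device)
y = torch.randint(0, 10, (64,), device=device)

# Standalone IEEE 16-bit: parameters, activations, and gradients all live in
# binary16, with no float32 master weights and no loss scaling.
fp16_model = make_model().to(device=device, dtype=torch.float16)
fp16_opt = torch.optim.SGD(fp16_model.parameters(), lr=1e-2)
loss = F.cross_entropy(fp16_model(x.half()), y)
loss.backward()
fp16_opt.step()

# Mixed precision, for contrast: float32 master weights, float16 compute under
# autocast, and gradient scaling to avoid underflow.
amp_model = make_model().to(device)
amp_opt = torch.optim.SGD(amp_model.parameters(), lr=1e-2)
scaler = torch.cuda.amp.GradScaler()
with torch.autocast(device_type="cuda", dtype=torch.float16):
    amp_loss = F.cross_entropy(amp_model(x), y)
scaler.scale(amp_loss).backward()
scaler.step(amp_opt)
scaler.update()
```

The sketch only illustrates the two training regimes being compared; the paper's claim is that the first, simpler regime can match the accuracy of the second on image classification benchmarks.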
no code implementations • 30 Jan 2023 • Juyoung Yun, Byungkon Kang, Zhoulai Fu
Lowering the precision of neural networks from the prevalent 32-bit format has long been considered harmful to performance, despite the gains in memory and speed.