Fixed-Point Back-Propagation Training

CVPR 2020. Xishan Zhang, Shaoli Liu, Rui Zhang, Chang Liu, Di Huang, Shiyi Zhou, Jiaming Guo, Qi Guo, Zidong Du, Tian Zhi, Yunji Chen

The recently emerged quantization technique (i.e., using low bit-width fixed-point data instead of high bit-width floating-point data) has been applied to the inference of deep neural networks for fast and efficient execution. However, directly applying quantization in training can cause significant accuracy loss, which remains an open challenge...
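To make the idea concrete, here is a minimal sketch of quantizing floating-point values to low bit-width fixed-point. This is a generic illustration of the fixed-point representation the abstract refers to, not the paper's specific training scheme; the function name and parameters are illustrative choices.

```python
import numpy as np

def quantize_fixed_point(x, bits=8, frac_bits=4):
    """Quantize a float array to signed fixed-point with `frac_bits`
    fractional bits, then dequantize back to float for inspection.
    Generic illustration; not the paper's back-propagation method."""
    scale = 2 ** frac_bits
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    q = np.clip(np.round(x * scale), qmin, qmax)  # integer codes
    return q / scale  # representable fixed-point values

x = np.array([0.1, -1.3, 3.75, 100.0])
print(quantize_fixed_point(x))  # values rounded/clipped to 8-bit fixed-point
```

Note how 100.0 saturates at the largest representable value (127/16 = 7.9375 here); this clipping error in gradients is one reason naive fixed-point training loses accuracy.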
