Loss Aware Post-training Quantization

17 Nov 2019 · Yury Nahshan, Brian Chmiel, Chaim Baskin, Evgenii Zheltonozhskii, Ron Banner, Alex M. Bronstein, Avi Mendelson

Neural network quantization enables the deployment of large models on resource-constrained devices. Current post-training quantization methods fall short in terms of accuracy for INT4 (or lower) but provide reasonable accuracy for INT8 (or above). …
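For context, post-training quantization typically maps a trained full-precision tensor onto a small uniform grid defined by a bit width and a clipping threshold; the INT4 vs. INT8 accuracy gap the abstract mentions comes from the much coarser grid at low bit widths. The sketch below is a generic illustration of symmetric uniform quantization, not the paper's method: the function name uniform_quantize, the clipping value of 3.0, and the Gaussian stand-in for a weight tensor are all assumptions for demonstration.

```python
import numpy as np

def uniform_quantize(x: np.ndarray, n_bits: int, clip: float) -> np.ndarray:
    """Symmetric uniform quantization of x to n_bits, clipping values
    to [-clip, clip]. Returns the dequantized tensor so the rounding
    error can be inspected directly."""
    levels = 2 ** (n_bits - 1) - 1          # e.g. 127 for INT8, 7 for INT4
    scale = clip / levels                   # step size of the quantization grid
    q = np.clip(np.round(x / scale), -levels - 1, levels)
    return q * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 1.0, size=10_000)       # toy stand-in for a weight tensor

for bits in (8, 4):
    w_q = uniform_quantize(w, bits, clip=3.0)
    mse = float(np.mean((w - w_q) ** 2))
    print(f"INT{bits}: quantization MSE = {mse:.6f}")
```

Running this shows the quantization error growing sharply from INT8 to INT4, which is the regime where, per the abstract, the choice of quantization parameters becomes difficult and loss-aware selection pays off.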
