Search Results for author: Kunyuan Du

Found 2 papers, 0 papers with code

FTL: A universal framework for training low-bit DNNs via Feature Transfer

no code implementations • ECCV 2020 • Kunyuan Du, Ya Zhang, Haibing Guan, Qi Tian, Shenggan Cheng, James Lin

Compared with low-bit models trained directly, the proposed framework brings 0.5% to 3.4% accuracy gains across three different quantization schemes.

Quantization Transfer Learning

From Quantized DNNs to Quantizable DNNs

no code implementations • 11 Apr 2020 • Kunyuan Du, Ya Zhang, Haibing Guan

This paper proposes Quantizable DNNs, a special type of DNN that can flexibly quantize its bit-width (denoted as 'bit modes' hereafter) during execution without further re-training.
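The core idea above, switching a network between bit modes at execution time, can be illustrated with a minimal sketch. This is not the paper's actual method; it is a generic symmetric uniform weight quantizer, with the function name and the set of bit modes chosen purely for illustration:

```python
import numpy as np

def quantize_uniform(w, bits):
    """Symmetric uniform quantization of a weight tensor.

    Weights are scaled onto the signed integer grid
    [-(2^(bits-1) - 1), 2^(bits-1) - 1], rounded, then rescaled
    back to floating point. Lower bit-widths give a coarser grid.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    if scale == 0.0:
        return w.copy()
    return np.round(w / scale) * scale

# A "bit mode" here is simply the bit-width selected at run time;
# the same float weights can be quantized to any mode on demand.
w = np.array([0.9, -0.3, 0.05, -1.0], dtype=np.float32)
for bit_mode in (8, 4, 2):
    wq = quantize_uniform(w, bit_mode)
    print(bit_mode, np.abs(w - wq).max())
```

In a Quantizable DNN the point is that one set of trained weights supports all such modes without re-training; the sketch only shows the per-mode quantization step itself.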
