Search Results for author: Tiancheng Wen

Found 1 paper, 1 paper with code

Preparing Lessons: Improve Knowledge Distillation with Better Supervision

1 code implementation • 18 Nov 2019 • Tiancheng Wen, Shenqi Lai, Xueming Qian

Knowledge distillation (KD) is widely used to train a compact model under the supervision of a larger model, which can effectively improve the compact model's performance.
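For context, below is a minimal sketch of the generic distillation loss that this setup builds on (soft teacher targets mixed with hard-label cross-entropy). It illustrates the standard KD framework rather than the specific "better supervision" scheme proposed in the paper, and the temperature `T` and weight `alpha` are illustrative values only.

```python
# Sketch of a generic knowledge distillation loss, assuming PyTorch.
# Not the method from "Preparing Lessons"; T and alpha are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft targets: KL divergence between temperature-softened distributions,
    # scaled by T^2 so gradient magnitudes stay comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random logits for a 10-class problem.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```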

Knowledge Distillation
