1 code implementation • 14 Aug 2024 • Wujie Sun, Defang Chen, Siwei Lyu, Genlang Chen, Chun Chen, Can Wang
Recent research on knowledge distillation has increasingly focused on logit distillation because of its simplicity, effectiveness, and versatility in model compression.
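To make the idea concrete, below is a minimal sketch of classic logit distillation (in the style of Hinton et al.), not this paper's specific method: the student matches the teacher's temperature-softened class distribution via a KL-divergence loss. The function name and the temperature value of 4.0 are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def logit_distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between temperature-softened teacher and student logits.

    A generic logit-distillation sketch; hyperparameters are assumptions,
    not the paper's settings.
    """
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```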
1 code implementation • 11 Jan 2024 • Wujie Sun, Defang Chen, Jiawei Chen, Yan Feng, Chun Chen, Can Wang
Deep learning has witnessed significant advancements in recent years, but at the cost of ever-increasing training, inference, and model storage overhead.
1 code implementation • 22 Nov 2022 • Wujie Sun, Defang Chen, Can Wang, Deshi Ye, Yan Feng, Chun Chen
Instead of aligning output images, we distill the teacher's sharpened feature distribution into the student with a dataset-independent classifier, making the student focus on the important features to improve performance.
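A hedged sketch of the idea described above, under stated assumptions: teacher and student features are passed through a shared, dataset-independent classifier head, the teacher's class distribution is sharpened with a low softmax temperature, and the student is trained to match it. The function name, argument names, and the 0.5 temperature are hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def feature_distillation_loss(student_feat, teacher_feat, classifier, temperature=0.5):
    """Align the student with the teacher's sharpened feature distribution.

    `classifier` is any nn.Module mapping features to class logits; it stands
    in for the paper's dataset-independent classifier (an assumption here).
    """
    with torch.no_grad():
        # Temperature < 1 sharpens the teacher's softmax, emphasizing
        # the features the teacher considers most important.
        p_teacher = F.softmax(classifier(teacher_feat) / temperature, dim=-1)
    log_p_student = F.log_softmax(classifier(student_feat), dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")
```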