Locally Linear Region Knowledge Distillation

9 Oct 2020 · Xiang Deng, Zhongfei Zhang

Knowledge distillation (KD) is an effective technique for transferring knowledge from one neural network (the teacher) to another (the student), thus improving the performance of the student. To make the student better mimic the behavior of the teacher, existing work focuses on designing different criteria to align their logits or representations. Different from these efforts, we address knowledge distillation from a novel data perspective. We argue that transferring knowledge only at sparse training data points cannot enable the student to capture the local shape of the teacher function well. To address this issue, we propose locally linear region knowledge distillation ($\rm L^2$RKD), which transfers knowledge in local, linear regions from a teacher to a student. This is achieved by enforcing the student to mimic the outputs of the teacher function in local, linear regions. In this way, the student is able to better capture the local shape of the teacher function and thus achieves better performance. Despite its simplicity, extensive experiments demonstrate that $\rm L^2$RKD is superior to the original KD in many aspects: it outperforms KD and other state-of-the-art approaches by a large margin, shows robustness and superiority under few-shot settings, and is more compatible with existing distillation approaches, further improving their performance significantly.
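To make the data-perspective idea concrete, below is a minimal PyTorch sketch of a local-region distillation term used alongside the standard KD loss. This is not the paper's implementation: the way local, linear regions are sampled here (convex combinations of each input with another randomly chosen input from the same batch), as well as the names and hyperparameters (`kd_loss`, `l2rkd_loss`, `train_step`, `T`, `alpha`, `beta`, `num_interp`), are assumptions made for illustration only.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Standard KD term: KL divergence between temperature-softened outputs."""
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T

def l2rkd_loss(student, teacher, x, T=4.0, num_interp=4):
    """Hypothetical local-region term (an assumption, not from the abstract):
    match teacher and student outputs at points sampled as convex combinations
    of each input with a randomly permuted input from the batch.
    Assumes 4D image inputs of shape (N, C, H, W)."""
    loss = 0.0
    for _ in range(num_interp):
        lam = torch.rand(x.size(0), 1, 1, 1, device=x.device)
        perm = torch.randperm(x.size(0), device=x.device)
        x_mix = lam * x + (1.0 - lam) * x[perm]
        with torch.no_grad():
            t_out = teacher(x_mix)
        s_out = student(x_mix)
        loss = loss + kd_loss(s_out, t_out, T)
    return loss / num_interp

def train_step(student, teacher, x, y, optimizer, alpha=0.5, beta=0.5):
    """One training step combining cross-entropy, KD at the data points,
    and the local-region term; the weighting scheme is illustrative only."""
    optimizer.zero_grad()
    s_logits = student(x)
    with torch.no_grad():
        t_logits = teacher(x)
    loss = (F.cross_entropy(s_logits, y)
            + alpha * kd_loss(s_logits, t_logits)
            + beta * l2rkd_loss(student, teacher, x))
    loss.backward()
    optimizer.step()
    return loss.item()
```

The local-region term reuses the same softened KL criterion as standard KD, but evaluates it at points sampled around the training data rather than only at the data points themselves, which is the distinction the abstract emphasizes.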
