Improving Knowledge Distillation via Regularizing Feature Norm and Direction

26 May 2023 · Yuzhu Wang, Lechao Cheng, Manni Duan, Yongheng Wang, Zunlei Feng, Shu Kong

Knowledge distillation (KD) exploits a large, well-trained model (i.e., the teacher) to train a small student model on the same dataset for the same task. Treating teacher features as knowledge, prevailing KD methods train the student by aligning its features with the teacher's, e.g., by minimizing the KL-divergence between their logits or the L2 distance between their intermediate features. While it is natural to believe that better alignment of student features to the teacher better distills teacher knowledge, simply forcing this alignment does not directly contribute to the student's performance, e.g., classification accuracy. In this work, we propose to align student features with the class-means of teacher features, where the class-means naturally serve as a strong classifier. To this end, we explore baseline techniques such as adopting a cosine-distance-based loss to encourage similarity between student features and their corresponding teacher class-means. Moreover, we train the student to produce large-norm features, inspired by other lines of work (e.g., model pruning and domain adaptation) that find large-norm features to be more significant. Finally, we propose a rather simple loss term (dubbed ND loss) that simultaneously (1) encourages the student to produce large-norm features and (2) aligns the direction of student features with the teacher class-means. Experiments on standard benchmarks demonstrate that the explored techniques help existing KD methods achieve better performance, i.e., higher classification accuracy on ImageNet and CIFAR-100 and higher detection precision on COCO. Importantly, the proposed ND loss helps the most, leading to state-of-the-art performance on these benchmarks. The source code is available at https://github.com/WangYZ1608/Knowledge-Distillation-via-ND.
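To make the idea concrete, below is a minimal sketch of the two ingredients described above: computing per-class means of teacher features and an ND-style loss that rewards large-norm student features whose direction matches the corresponding teacher class-mean. This is an illustrative PyTorch sketch, not the authors' implementation (see the linked repository for that); the function names (compute_class_means, nd_loss), the norm_weight hyper-parameter, and the exact way the two terms are combined are assumptions.

```python
# Hypothetical sketch of an ND-style regularizer, assuming teacher penultimate
# features can be extracted for the whole training set. Not the official code.
import torch
import torch.nn.functional as F


@torch.no_grad()
def compute_class_means(teacher_feats, labels, num_classes):
    """Per-class mean of teacher features.

    teacher_feats: (N, D) teacher penultimate-layer features.
    labels:        (N,)   ground-truth class indices (LongTensor).
    Returns:       (C, D) class-mean matrix.
    """
    dim = teacher_feats.size(1)
    sums = torch.zeros(num_classes, dim, device=teacher_feats.device)
    counts = torch.zeros(num_classes, device=teacher_feats.device)
    sums.index_add_(0, labels, teacher_feats)
    counts.index_add_(0, labels, torch.ones_like(labels, dtype=sums.dtype))
    return sums / counts.clamp(min=1).unsqueeze(1)


def nd_loss(student_feats, labels, class_means, norm_weight=0.1):
    """Illustrative norm-and-direction term.

    Direction: cosine distance between each student feature and the teacher
    class-mean of its ground-truth class. Norm: reward large-norm student
    features (negative mean L2 norm). norm_weight is a made-up hyper-parameter.
    """
    target = class_means[labels]                                   # (B, D)
    direction = (1.0 - F.cosine_similarity(student_feats, target, dim=1)).mean()
    norm = -student_feats.norm(p=2, dim=1).mean()
    return direction + norm_weight * norm
```

In training, such a term would be added to the usual cross-entropy and logit-distillation (KL) losses with its own weight; the teacher class-means are precomputed once and kept fixed.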


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Knowledge Distillation | CIFAR-100 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | #5 |
| Knowledge Distillation | CIFAR-100 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | #4 |
| Knowledge Distillation | CIFAR-100 | DKD++ (T: resnet50, S: mobilenet-v2) | Top-1 Accuracy (%) | 70.82 | #24 |
| Knowledge Distillation | CIFAR-100 | KD++ (T: resnet56, S: resnet20) | Top-1 Accuracy (%) | 72.53 | #20 |
| Knowledge Distillation | CIFAR-100 | ReviewKD++ (T: WRN-40-2, S: WRN-40-1) | Top-1 Accuracy (%) | 75.66 | #11 |
| Knowledge Distillation | CIFAR-100 | DKD++ (T: resnet-32x4, S: resnet-8x4) | Top-1 Accuracy (%) | 76.28 | #9 |
| Knowledge Distillation | COCO 2017 val | ReviewKD++ (T: Faster R-CNN resnet101, S: Faster R-CNN resnet18) | mAP | 37.43 | #2 |
| | | | AP@0.5 | 57.96 | #2 |
| | | | AP@0.75 | 40.15 | #2 |
| Knowledge Distillation | COCO 2017 val | ReviewKD++ (T: Faster R-CNN resnet101, S: Faster R-CNN resnet50) | mAP | 41.03 | #1 |
| | | | AP@0.5 | 61.80 | #1 |
| | | | AP@0.75 | 44.94 | #1 |
| Knowledge Distillation | COCO 2017 val | ReviewKD++ (T: Faster R-CNN resnet101, S: Faster R-CNN mobilenet-v2) | mAP | 34.51 | #3 |
| | | | AP@0.5 | 55.18 | #3 |
| | | | AP@0.75 | 37.21 | #3 |
| Knowledge Distillation | ImageNet | KD++ (T: resnet152, S: resnet34) | Top-1 Accuracy (%) | 75.53 | #14 |
| | | | CRD training setting | ✘ | #1 |
| Knowledge Distillation | ImageNet | KD++ (T: resnet152, S: resnet101) | Top-1 Accuracy (%) | 79.15 | #6 |
| | | | model size | 44.5M | #4 |
| | | | CRD training setting | ✘ | #1 |
| Knowledge Distillation | ImageNet | KD++ (T: ViT-S, S: resnet18) | Top-1 Accuracy (%) | 71.46 | #35 |
| | | | CRD training setting | ✘ | #1 |
| Knowledge Distillation | ImageNet | KD++ (T: ViT-B, S: resnet18) | Top-1 Accuracy (%) | 71.84 | #28 |
| | | | CRD training setting | ✘ | #1 |
| Knowledge Distillation | ImageNet | KD++ (T: resnet34, S: resnet18) | Top-1 Accuracy (%) | 72.07 | #23 |
| | | | CRD training setting | ✘ | #1 |
| Knowledge Distillation | ImageNet | KD++ (T: resnet50, S: resnet18) | Top-1 Accuracy (%) | 72.53 | #19 |
| | | | CRD training setting | ✘ | #1 |
| Knowledge Distillation | ImageNet | KD++ (T: resnet101, S: resnet18) | Top-1 Accuracy (%) | 72.54 | #17 |
| | | | CRD training setting | ✘ | #1 |
| Knowledge Distillation | ImageNet | KD++ (T: resnet152, S: resnet18) | Top-1 Accuracy (%) | 72.54 | #17 |
| | | | CRD training setting | ✘ | #1 |
| Knowledge Distillation | ImageNet | KD++ (T: resnet152, S: resnet50) | Top-1 Accuracy (%) | 77.48 | #9 |
| | | | CRD training setting | ✘ | #1 |
| Knowledge Distillation | ImageNet | KD++ (T: RegNetY-16GF, S: ViT-B) | Top-1 Accuracy (%) | 83.60 | #1 |
| | | | model size | 87M | #2 |
| | | | CRD training setting | ✘ | #1 |
| Knowledge Distillation | ImageNet | ReviewKD++ (T: resnet50, S: mobilenet-v1) | Top-1 Accuracy (%) | 72.96 | #16 |
