Search Results for author: Chaofei Wang

Found 6 papers, 3 papers with code

Efficient Knowledge Distillation from Model Checkpoints

1 code implementation • 12 Oct 2022 • Chaofei Wang, Qisen Yang, Rui Huang, Shiji Song, Gao Huang

Knowledge distillation is an effective approach to learning compact models (students) under the supervision of large, strong models (teachers).

Knowledge Distillation
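
For context, the baseline this paper builds on is standard knowledge distillation (Hinton et al.). A minimal PyTorch sketch of that objective follows; the temperature T and mixing weight alpha are illustrative hyperparameters, not values from the paper:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Classic KD objective: weighted sum of cross-entropy on ground-truth
    labels and KL divergence between temperature-softened distributions."""
    # Soft targets: soften both student and teacher outputs with temperature T.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients back to the original magnitude
    # Hard targets: the usual cross-entropy on true labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```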

Learning to Weight Samples for Dynamic Early-exiting Networks

1 code implementation • 17 Sep 2022 • Yizeng Han, Yifan Pu, Zihang Lai, Chaofei Wang, Shiji Song, Junfeng Cao, Wenhui Huang, Chao Deng, Gao Huang

Intuitively, easy samples, which generally exit early in the network during inference, should contribute more to training early classifiers.

Meta-Learning
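
At the loss level, weighting each sample differently at each exit reduces to something like the sketch below; the weighting network that the paper meta-learns to produce `sample_weights` is abstracted away here and simply taken as given:

```python
import torch
import torch.nn.functional as F

def weighted_multi_exit_loss(exit_logits, labels, sample_weights):
    """Weighted training loss for a multi-exit network.

    exit_logits:    list of K tensors, each (B, num_classes), one per exit
    labels:         (B,) ground-truth class indices
    sample_weights: (B, K) per-sample, per-exit weights (assumed given;
                    in the paper they come from a meta-learned network)
    """
    total = 0.0
    for k, logits in enumerate(exit_logits):
        per_sample_ce = F.cross_entropy(logits, labels, reduction="none")  # (B,)
        total = total + (sample_weights[:, k] * per_sample_ce).mean()
    return total
```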

Few Shot Generative Model Adaption via Relaxed Spatial Structural Alignment

1 code implementation • CVPR 2022 • Jiayu Xiao, Liang Li, Chaofei Wang, Zheng-Jun Zha, Qingming Huang

A feasible solution is to start with a GAN well trained on a large-scale source domain and adapt it to the target domain with a few samples, termed few-shot generative model adaption.
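
A hedged sketch of this adaptation setup, using toy stand-in networks rather than a real pretrained GAN, and omitting the paper's relaxed spatial structural alignment term:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for a GAN pretrained on a large source domain; in practice
# these would be e.g. StyleGAN2 modules with source-domain weights loaded.
z_dim = 64
G = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, 3 * 32 * 32))
D = nn.Sequential(nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 1))
# G.load_state_dict(torch.load("source_generator.pt"))  # hypothetical checkpoint

few_shot_targets = torch.randn(10, 3 * 32 * 32)  # placeholder: ~10 target samples
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(1000):
    z = torch.randn(10, z_dim)
    fake = G(z)
    # Discriminator: separate the few real target images from generations.
    d_loss = (F.softplus(D(fake.detach())) + F.softplus(-D(few_shot_targets))).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator. The paper adds an alignment term
    # that preserves the source generator's spatial structure, omitted here.
    g_loss = F.softplus(-D(G(z))).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```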

Learn From the Past: Experience Ensemble Knowledge Distillation

no code implementations • 25 Feb 2022 • Chaofei Wang, Shaowei Zhang, Shiji Song, Gao Huang

We uniformly save a moderate number of intermediate models from the teacher's training process, and then integrate the knowledge of these intermediate models with an ensemble technique.

Knowledge Distillation • Transfer Learning
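
One plain way to realize "integrate the knowledge by ensemble" is to average the checkpoints' logits and distill from that average; the paper's actual integration mechanism may be more sophisticated. A sketch under that assumption:

```python
import torch
import torch.nn.functional as F

def checkpoint_ensemble_logits(checkpoint_models, x):
    """Average the logits of intermediate teacher checkpoints, restored from
    snapshots saved uniformly along the teacher's training trajectory."""
    with torch.no_grad():
        logits = torch.stack([m(x) for m in checkpoint_models], dim=0)
    return logits.mean(dim=0)  # (B, num_classes) ensemble teacher signal

def ensemble_kd_loss(student_logits, ensemble_logits, T=4.0):
    """Distill the student from the ensemble signal via the usual
    temperature-softened KL divergence."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(ensemble_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
```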

Fine-Grained Few Shot Learning with Foreground Object Transformation

no code implementations • 13 Sep 2021 • Chaofei Wang, Shiji Song, Qisen Yang, Xiang Li, Gao Huang

As a data augmentation method, FOT can be conveniently applied to any existing few-shot learning algorithm, greatly improving its performance on FG-FSL tasks.

Data Augmentation • Few-Shot Learning +1
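
The snippet does not spell out what FOT does to an image; a plausible, heavily hedged reading of a foreground object transformation is to transform only the masked foreground and composite it back over the background, as sketched below (the mask source and blending details are assumptions):

```python
import torch
import torchvision.transforms.functional as TF

def foreground_object_transform(image, fg_mask, angle=15.0, scale=1.1):
    """Hypothetical foreground-object augmentation: geometrically transform
    only the foreground region, then composite it over the original
    background.

    image:   (3, H, W) float tensor
    fg_mask: (1, H, W) binary mask; in practice it might come from a
             saliency or segmentation model
    """
    fg = image * fg_mask  # isolate the foreground object
    # Transform the foreground and its mask consistently.
    fg_t = TF.affine(fg, angle=angle, translate=[0, 0], scale=scale, shear=[0.0])
    mask_t = TF.affine(fg_mask, angle=angle, translate=[0, 0], scale=scale, shear=[0.0])
    background = image * (1 - fg_mask)  # original image minus the object
    return background * (1 - mask_t) + fg_t * mask_t  # composite augmented view
```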

CAM-loss: Towards Learning Spatially Discriminative Feature Representations

no code implementations • ICCV 2021 • Chaofei Wang, Jiayu Xiao, Yizeng Han, Qisen Yang, Shiji Song, Gao Huang

The backbone of a traditional CNN classifier is generally regarded as a feature extractor, followed by a linear layer that performs the classification.

Few-Shot Learning • Knowledge Distillation +1
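
This feature-extractor-plus-linear-classifier view is exactly what class activation maps (CAMs, Zhou et al. 2016) exploit, and CAM-loss builds on such maps. A minimal sketch of computing a CAM from backbone features and classifier weights (the paper's full CAM-loss formulation is not reproduced here):

```python
import torch

def class_activation_map(features, fc_weight, class_idx):
    """Compute a class activation map from the last conv features and the
    final linear layer's weights.

    features:  (B, C, H, W) output of the CNN backbone (feature extractor)
    fc_weight: (num_classes, C) weight of the final linear layer
    class_idx: (B,) target class per sample
    """
    w = fc_weight[class_idx]                       # (B, C) class-specific weights
    cam = torch.einsum("bc,bchw->bhw", w, features)  # weighted feature sum
    cam = torch.relu(cam)                          # keep positive evidence only
    # Normalize each map to [0, 1] for visualization or as a soft target.
    flat = cam.flatten(1)
    cam = (cam - flat.min(1)[0][:, None, None]) / (
        flat.max(1)[0][:, None, None] - flat.min(1)[0][:, None, None] + 1e-8)
    return cam
```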
