Search Results for author: Chupeng Cui

Found 3 papers, 0 papers with code

Spirit Distillation: A Model Compression Method with Multi-domain Knowledge Transfer

no code implementations • 29 Apr 2021 • Zhiyuan Wu, Yu Jiang, Minghao Zhao, Chupeng Cui, Zongmin Yang, Xinhui Xue, Hong Qi

To further improve the robustness of the student, we extend SD to Enhanced Spirit Distillation (ESD), which exploits more comprehensive knowledge by introducing a proximity domain, similar to the target domain, for feature extraction.
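As an illustrative sketch (not the authors' released code) of the ESD idea, the feature-extraction data could simply mix the scarce target-domain images with proximity-domain images; the dataset names, batch size, and PyTorch plumbing below are assumptions.

from torch.utils.data import ConcatDataset, DataLoader

def build_esd_feature_loader(target_dataset, proximity_dataset, batch_size=8):
    # ESD supplies the student with "general feature" supervision drawn from
    # both the scarce target domain (e.g. road scenes) and a similar,
    # better-resourced proximity domain.
    mixed = ConcatDataset([target_dataset, proximity_dataset])
    return DataLoader(mixed, batch_size=batch_size, shuffle=True)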

General Knowledge • Knowledge Distillation +2

Spirit Distillation: Precise Real-time Semantic Segmentation of Road Scenes with Insufficient Data

no code implementations • 25 Mar 2021 • Zhiyuan Wu, Yu Jiang, Chupeng Cui, Zongmin Yang, Xinhui Xue, Hong Qi

Inspired by the ideas of fine-tuning-based transfer learning (FTT) and feature-based knowledge distillation, we propose Spirit Distillation (SD), a new knowledge distillation method for cross-domain knowledge transfer and efficient training with insufficient data. SD allows the student network to mimic the teacher network in extracting general features, so that a compact and accurate student network can be trained for real-time semantic segmentation of road scenes.
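A minimal sketch of the feature-mimicking objective described above, assuming a single teacher/student feature pair, a 1x1 channel adapter, and an MSE mimic term weighted against the segmentation loss; these specifics are illustrative rather than the paper's exact configuration.

import torch.nn as nn
import torch.nn.functional as F

class SpiritDistillLoss(nn.Module):
    def __init__(self, student_channels, teacher_channels, beta=0.5):
        super().__init__()
        # 1x1 conv aligns the student's feature channels with the teacher's.
        self.adapter = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)
        self.beta = beta

    def forward(self, student_feat, teacher_feat, student_logits, labels):
        # Match spatial size before comparing feature maps.
        mimic = F.interpolate(self.adapter(student_feat),
                              size=teacher_feat.shape[-2:],
                              mode="bilinear", align_corners=False)
        feat_loss = F.mse_loss(mimic, teacher_feat.detach())
        task_loss = F.cross_entropy(student_logits, labels)
        return task_loss + self.beta * feat_loss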

Autonomous Driving • Few-Shot Learning +4

Activation Map Adaptation for Effective Knowledge Distillation

no code implementations • 26 Oct 2020 • Zhiyuan Wu, Hong Qi, Yu Jiang, Minghao Zhao, Chupeng Cui, Zongmin Yang, Xinhui Xue

Model compression has become a recent trend, driven by the need to deploy neural networks on embedded and mobile devices.

Knowledge Distillation • Model Compression +1
