no code implementations • 31 May 2023 • Defang Chen, Zhenyu Zhou, Jian-Ping Mei, Chunhua Shen, Chun Chen, Can Wang
Recent years have witnessed significant progress in developing effective training and fast sampling techniques for diffusion models.
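As a generic illustration of what fast sampling means in this setting, the sketch below runs a deterministic DDIM-style sampler over a shortened timestep schedule. It is not the method of this paper, and `eps_model` is a hypothetical placeholder for a trained noise-prediction network.

```python
import torch

# Hypothetical stand-in for a trained noise-prediction network eps_theta(x, t);
# in practice this would be a large U-Net or transformer.
def eps_model(x, t):
    return torch.zeros_like(x)  # placeholder: predicts zero noise

def ddim_sample(shape, num_steps=10, T=1000):
    """Deterministic DDIM-style sampler with a coarse timestep schedule.

    "Fast sampling" here simply means taking num_steps << T denoising
    steps along the deterministic trajectory instead of all T steps.
    """
    # Standard linear beta schedule and cumulative alpha products.
    betas = torch.linspace(1e-4, 2e-2, T)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)

    timesteps = torch.linspace(T - 1, 0, num_steps).long()
    x = torch.randn(shape)  # start from pure Gaussian noise
    for i, t in enumerate(timesteps):
        ab_t = alpha_bar[t]
        ab_prev = alpha_bar[timesteps[i + 1]] if i + 1 < num_steps else torch.tensor(1.0)
        eps = eps_model(x, t)
        # Predict the clean sample, then jump directly to the previous timestep.
        x0_pred = (x - (1 - ab_t).sqrt() * eps) / ab_t.sqrt()
        x = ab_prev.sqrt() * x0_pred + (1 - ab_prev).sqrt() * eps
    return x

sample = ddim_sample((1, 3, 32, 32))
```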
1 code implementation • CVPR 2022 • Defang Chen, Jian-Ping Mei, Hailin Zhang, Can Wang, Yan Feng, Chun Chen
Knowledge distillation aims to compress a powerful yet cumbersome teacher model into a lightweight student model with little sacrifice in performance (a sketch of the standard soft-target loss follows this entry).
Ranked #3 on Knowledge Distillation on CIFAR-100
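For context, here is a minimal sketch of the classic soft-target distillation loss (Hinton et al., 2015) that this line of work builds on; the paper's own objective may differ, and all tensor shapes below are illustrative.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Classic soft-target distillation loss, as a generic illustration.

    Combines the usual cross-entropy on ground-truth labels with a KL term
    that pulls the student's temperature-softened predictions toward the
    teacher's.
    """
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # T^2 keeps gradient magnitudes comparable across temperatures
    return (1 - alpha) * ce + alpha * kl

# Toy usage: a 5-way task with a batch of 8 examples.
student_logits = torch.randn(8, 5, requires_grad=True)
teacher_logits = torch.randn(8, 5)
labels = torch.randint(0, 5, (8,))
kd_loss(student_logits, teacher_logits, labels).backward()
```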
2 code implementations • 6 Dec 2020 • Defang Chen, Jian-Ping Mei, Yuan Zhang, Can Wang, Yan Feng, Chun Chen
Knowledge distillation is a technique to enhance the generalization ability of a student model by exploiting outputs from a teacher model.
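A small, self-contained illustration of why teacher outputs help generalization: softening the teacher's logits with a temperature exposes the inter-class similarity structure ("dark knowledge") that one-hot labels hide. The logits and class names below are made up for the example.

```python
import torch
import torch.nn.functional as F

# A toy teacher output for one image whose true class is index 0 ("cat"):
logits = torch.tensor([5.0, 2.5, 0.5])  # hypothetical classes: cat, dog, car

for T in (1.0, 4.0):
    probs = F.softmax(logits / T, dim=0)
    print(f"T={T}: {probs.tolist()}")
# T=1 is near one-hot; T=4 reveals that "dog" is far more plausible than
# "car", which is exactly the inter-class similarity a student can exploit.
```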
2 code implementations • 1 Dec 2019 • Defang Chen, Jian-Ping Mei, Can Wang, Yan Feng, Chun Chen
The second-level distillation then transfers the knowledge of the ensemble of auxiliary peers to the group leader, i.e., the model used for inference.
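A minimal sketch of the second-level step as described, assuming the auxiliary peers have already been trained in the first level: their softened predictions are combined into an ensemble target and distilled into the group leader. The simple averaging and all names below are illustrative; the paper's aggregation may differ.

```python
import torch
import torch.nn.functional as F

def second_level_distillation_loss(peer_logits_list, leader_logits, T=3.0):
    """Distill the peer ensemble into the group leader (second level only).

    The softened predictions of the auxiliary peers are averaged into an
    ensemble target, and the leader, the only model kept for inference,
    is trained to match it. First-level peer training is omitted.
    """
    with torch.no_grad():
        peer_probs = torch.stack(
            [F.softmax(logits / T, dim=1) for logits in peer_logits_list]
        )
        ensemble_target = peer_probs.mean(dim=0)  # aggregate peer knowledge
    return F.kl_div(
        F.log_softmax(leader_logits / T, dim=1),
        ensemble_target,
        reduction="batchmean",
    ) * (T * T)

# Toy usage: three auxiliary peers and one group leader on a 10-class task.
peers = [torch.randn(4, 10) for _ in range(3)]
leader = torch.randn(4, 10, requires_grad=True)
second_level_distillation_loss(peers, leader).backward()
```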
no code implementations • 16 Feb 2015 • Jian-Ping Mei, Chee-Keong Kwoh, Peng Yang, Xiao-Li Li
Classification is one of the most widely used supervised learning tasks: it categorizes objects into predefined classes based on prior knowledge, typically labeled examples.
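As a toy instance of this task definition (not the paper's method), the snippet below fits a nearest-centroid classifier on a handful of labeled points and then assigns new objects to one of two predefined classes.

```python
import numpy as np

# Labeled training examples for two predefined classes (0 and 1).
X_train = np.array([[0.9, 1.1], [1.2, 0.8], [3.0, 3.2], [2.8, 3.1]])
y_train = np.array([0, 0, 1, 1])

# "Training" reduces to computing one centroid per class.
centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    # Assign x to the class whose centroid is closest (Euclidean distance).
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

print(predict(np.array([1.0, 1.0])))  # -> 0
print(predict(np.array([3.0, 3.0])))  # -> 1
```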