Search Results for author: Shuman Tian

Found 1 paper, 0 papers with code

Meta-Ensemble Parameter Learning

no code implementations • 5 Oct 2022 • Zhengcong Fei, Shuman Tian, Junshi Huang, Xiaoming Wei, Xiaolin Wei

Knowledge distillation is an approach that allows a single model to efficiently capture the approximate performance of an ensemble, but it scales poorly: the student must be re-trained whenever new teacher models are introduced.
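To make the distillation setting concrete, below is a minimal sketch (not code from the paper) of distilling an ensemble into a single student in PyTorch; the function name, temperature value, and averaging scheme are illustrative assumptions.

```python
# Minimal ensemble-distillation sketch, assuming PyTorch.
# The averaged soft predictions of the teacher ensemble supervise one student.
# Note the scalability issue the paper targets: adding a new teacher changes
# the target distribution, so this training loss must be re-optimized.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits_list, temperature=4.0):
    """KL divergence between the student and the averaged ensemble soft targets."""
    # Average the teachers' temperature-softened class distributions.
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=-1) for t in teacher_logits_list]
    ).mean(dim=0)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes are preserved (Hinton et al., 2015).
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2
```

In this sketch, dropping or adding an entry in `teacher_logits_list` silently changes the supervision signal, which is why re-training is required under plain distillation.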

Knowledge Distillation • Meta-Learning
