Search Results for author: Chengming Hu

Found 6 papers, 0 papers with code

Less or More From Teacher: Exploiting Trilateral Geometry For Knowledge Distillation

no code implementations • 22 Dec 2023 • Chengming Hu, Haolun Wu, Xuan Li, Chen Ma, Xi Chen, Jun Yan, Boyu Wang, Xue Liu

A simple neural network then learns the implicit mapping from the intra- and inter-sample relations to an adaptive, sample-wise knowledge fusion ratio in a bilevel-optimization manner.

Bilevel Optimization • Click-Through Rate Prediction • +2
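The abstract above describes learning a sample-wise ratio that fuses teacher knowledge with ground-truth supervision. The snippet below is a minimal PyTorch sketch of that general idea, not the paper's actual method: the two-layer fusion network, its relation-feature input, and the single-level (non-bilevel) loss blending are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionRatioNet(nn.Module):
    """Hypothetical network mapping per-sample relation features to a fusion ratio in (0, 1)."""
    def __init__(self, in_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, relation_feats):
        return torch.sigmoid(self.mlp(relation_feats))  # shape: (batch, 1)

def fused_distill_loss(student_logits, teacher_logits, labels, ratio, T=4.0):
    """Blend the hard-label loss and the soft teacher loss with a learned per-sample ratio."""
    ce = F.cross_entropy(student_logits, labels, reduction="none")          # (batch,)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="none",
    ).sum(-1) * (T * T)                                                     # (batch,)
    ratio = ratio.squeeze(-1)
    return ((1.0 - ratio) * ce + ratio * kd).mean()
```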

Teacher-Student Architecture for Knowledge Distillation: A Survey

no code implementations • 8 Aug 2023 • Chengming Hu, Xuan Li, Dan Liu, Haolun Wu, Xi Chen, Ju Wang, Xue Liu

Recently, Teacher-Student architectures have been widely and effectively adopted for various knowledge distillation (KD) objectives, including knowledge compression, knowledge expansion, knowledge adaptation, and knowledge enhancement.

Knowledge Distillation • regression

Phase Matching for Out-of-Distribution Generalization

no code implementations • 24 Jul 2023 • Chengming Hu, Yeqian Du, Rui Wang, Hao Chen

In this paper, we aim to clarify the relationships between Domain Generalization (DG) and the frequency components, and explore the spatial relationships of the phase spectrum.

Domain Generalization • Out-of-Distribution Generalization • +1
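The abstract above concerns the phase spectrum of the Fourier transform. Below is a minimal NumPy sketch of the standard amplitude/phase decomposition (not the paper's specific method); swapping the amplitude while keeping the phase is a common way to probe how much domain-invariant semantic content the phase spectrum carries.

```python
import numpy as np

def split_amplitude_phase(img):
    """Decompose a (H, W) image into its Fourier amplitude and phase spectra."""
    spectrum = np.fft.fft2(img)
    return np.abs(spectrum), np.angle(spectrum)

def recombine(amplitude, phase):
    """Rebuild an image from a (possibly swapped) amplitude and a phase spectrum."""
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))

# Example: keep the phase of img_a but borrow the amplitude of img_b.
img_a, img_b = np.random.rand(64, 64), np.random.rand(64, 64)
amp_a, pha_a = split_amplitude_phase(img_a)
amp_b, _ = split_amplitude_phase(img_b)
hybrid = recombine(amp_b, pha_a)  # semantics of img_a, amplitude statistics of img_b
```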

Teacher-Student Architecture for Knowledge Learning: A Survey

no code implementations • 28 Oct 2022 • Chengming Hu, Xuan Li, Dan Liu, Xi Chen, Ju Wang, Xue Liu

To tackle this issue, Teacher-Student architectures were first utilized in knowledge distillation, where simple student networks can achieve comparable performance to deep teacher networks.

Knowledge Distillation • Multi-Task Learning
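For context on the Teacher-Student setup referenced above, here is a minimal sketch of a classic logit-distillation training step in the style of Hinton et al., i.e. the general technique the survey covers rather than any single surveyed method; the temperature, loss weight, and function name are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, x, y, optimizer, T=2.0, alpha=0.5):
    """One teacher-student training step: a frozen teacher supplies soft targets for the student."""
    with torch.no_grad():                      # the teacher is not updated
        teacher_logits = teacher(x)
    student_logits = student(x)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, y)
    loss = alpha * soft + (1.0 - alpha) * hard
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```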

Encoder-Decoder Architecture for Supervised Dynamic Graph Learning: A Survey

no code implementations • 20 Mar 2022 • Yuecai Zhu, Fuyuan Lyu, Chengming Hu, Xi Chen, Xue Liu

However, the temporal information embedded in dynamic graphs brings new challenges to analyzing and deploying them.

Graph Learning

MOBA: Multi-teacher Model Based Reinforcement Learning

no code implementations • 29 Sep 2021 • Jikun Kang, Xi Chen, Ju Wang, Chengming Hu, Xue Liu, Gregory Dudek

Results show that, compared with SOTA model-free methods, our method can improve data efficiency and system performance by up to 75% and 10%, respectively.

Decision Making • Knowledge Distillation • +4
