Search Results for author: Mengke Li

Found 8 papers, 6 papers with code

Improve Knowledge Distillation via Label Revision and Data Selection

no code implementations • 3 Apr 2024 • Weichao Lan, Yiu-ming Cheung, Qing Xu, Buhua Liu, Zhikai Hu, Mengke Li, Zhenghua Chen

In addition to supervision from the ground truth, the vanilla KD method treats the teacher's predictions as soft labels to supervise the training of the student model.

Knowledge Distillation • Model Compression
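A minimal sketch of the vanilla soft-label distillation loss described in the entry above, assuming the standard temperature-scaled KL term combined with cross-entropy on the ground truth; the temperature and weighting values are illustrative, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def vanilla_kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Cross-entropy on the ground truth plus KL divergence to the teacher's
    temperature-softened predictions (the 'soft labels')."""
    ce = F.cross_entropy(student_logits, labels)
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    log_soft_student = F.log_softmax(student_logits / T, dim=1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * T * T
    return alpha * ce + (1.0 - alpha) * kd
```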

Feature Fusion from Head to Tail for Long-Tailed Visual Recognition

1 code implementation • 12 Jun 2023 • Mengke Li, Zhikai Hu, Yang Lu, Weichao Lan, Yiu-ming Cheung, Hui Huang

To rectify this issue, we propose to augment tail classes by grafting diverse semantic information from head classes, referred to as head-to-tail fusion (H2T).
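As a rough illustration of the head-to-tail fusion idea in the entry above, the sketch below grafts a random subset of feature-map channels from head-class samples onto tail-class samples; the channel-replacement rule, the `fuse_ratio` parameter, and the sample pairing are assumptions for illustration, not the exact H2T procedure.

```python
import torch

def head_to_tail_fuse(tail_feats, head_feats, fuse_ratio=0.3):
    """Replace a random fraction of the tail samples' feature channels with
    the corresponding channels from head-class samples (illustrative only).

    tail_feats, head_feats: tensors of shape (N, C, H, W).
    """
    c = tail_feats.shape[1]
    k = max(1, int(c * fuse_ratio))                  # channels to graft
    idx = torch.randperm(c, device=tail_feats.device)[:k]
    fused = tail_feats.clone()
    fused[:, idx] = head_feats[:, idx]               # graft head-class semantics
    return fused
```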

Joint Channel Estimation and Feedback with Masked Token Transformers in Massive MIMO Systems

no code implementations • 8 Jun 2023 • Mingming Zhao, Lin Liu, Lifu Liu, Mengke Li, Qi Tian

To achieve joint channel estimation and feedback, this paper proposes an encoder-decoder based network that unveils the intrinsic frequency-domain correlation within the CSI matrix.

Denoising
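The entry above describes an encoder-decoder network with masked tokens operating on the CSI matrix. The sketch below is a generic masked-token encoder-decoder in PyTorch, not the paper's architecture; the token dimension, masking scheme, and patching of the CSI matrix into tokens are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MaskedCSIAutoencoder(nn.Module):
    """Generic masked-token encoder-decoder: CSI tokens are embedded, masked
    positions are replaced by a learned token, and the decoder reconstructs
    the full CSI matrix (illustrative sketch)."""

    def __init__(self, dim=128, depth=4, heads=4):
        super().__init__()
        self.embed = nn.Linear(dim, dim)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        enc_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, depth)
        self.decoder = nn.TransformerDecoder(dec_layer, depth)
        self.head = nn.Linear(dim, dim)

    def forward(self, tokens, mask):
        # tokens: (B, N, dim) flattened CSI patches; mask: (B, N) bool, True = masked
        x = self.embed(tokens)
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        memory = self.encoder(x)
        out = self.decoder(x, memory)
        return self.head(out)            # reconstructed CSI tokens
```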

Long-tailed Visual Recognition via Gaussian Clouded Logit Adjustment

1 code implementation • CVPR 2022 • Mengke Li, Yiu-ming Cheung, Yang Lu

It is unfavorable for training on balanced data, but can be utilized to adjust the validity of samples in long-tailed data, thereby relieving the distortion of the embedding space caused by long-tailed problems.
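A heavily simplified sketch of the idea suggested by the title above: perturb the target logit with Gaussian noise whose amplitude grows for rarer classes. The scaling rule (`cloud_size` tied inversely to class frequency) and the loss form are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def gaussian_clouded_ce(logits, labels, class_counts, max_cloud=1.0):
    """Cross-entropy with a class-dependent Gaussian 'cloud' subtracted from the
    target logit: rarer classes receive a larger cloud (illustrative sketch)."""
    freq = class_counts.float() / class_counts.sum()
    cloud_size = max_cloud * (1.0 - freq / freq.max())   # ~0 for the largest class
    noise = torch.randn_like(labels, dtype=logits.dtype).abs()
    rows = torch.arange(logits.size(0), device=logits.device)
    perturbed = logits.clone()
    perturbed[rows, labels] -= cloud_size[labels] * noise
    return F.cross_entropy(perturbed, labels)
```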

Adjusting Logit in Gaussian Form for Long-Tailed Visual Recognition

1 code implementation • 18 May 2023 • Mengke Li, Yiu-ming Cheung, Yang Lu, Zhikai Hu, Weichao Lan, Hui Huang

Based on these perturbed features, two novel logit adjustment methods are proposed to improve model performance at a modest computational overhead.
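The two logit adjustment methods themselves are not spelled out in the snippet above; as a point of reference, the sketch below shows the classic prior-based logit adjustment (adding the scaled log of the class prior to the logits during training), which is a standard baseline rather than the paper's method.

```python
import torch
import torch.nn.functional as F

def logit_adjusted_ce(logits, labels, class_counts, tau=1.0):
    """Classic logit adjustment: add tau * log(prior) to the logits before
    cross-entropy, so frequent classes must win by a larger margin."""
    prior = class_counts.float() / class_counts.sum()
    adjusted = logits + tau * torch.log(prior + 1e-12)
    return F.cross_entropy(adjusted, labels)
```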

Feature-Balanced Loss for Long-Tailed Visual Recognition

1 code implementation • IEEE International Conference on Multimedia and Expo (ICME) 2022 • Mengke Li, Yiu-ming Cheung, Juyong Jiang

Deep neural networks frequently suffer from performance degradation when the training data is long-tailed, because a few majority classes dominate training, resulting in a biased model.

Long-Tailed Visual Recognition via Self-Heterogeneous Integration with Knowledge Excavation

1 code implementation • CVPR 2023 • Yan Jin, Mengke Li, Yang Lu, Yiu-ming Cheung, Hanzi Wang

To address this problem, state-of-the-art methods usually adopt a mixture of experts (MoE) to focus on different parts of the long-tailed distribution.

Transfer Learning
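A generic illustration of the mixture-of-experts setup mentioned in the entry above: several classifier heads ("experts") on a shared backbone whose logits are averaged. This is a bare-bones MoE sketch, not the paper's architecture; the backbone, number of experts, and aggregation rule are assumptions.

```python
import torch
import torch.nn as nn

class SimpleMoEClassifier(nn.Module):
    """Shared feature extractor with several expert heads; logits are averaged
    (bare-bones sketch of an MoE classifier for long-tailed recognition)."""

    def __init__(self, backbone, feat_dim, num_classes, num_experts=3):
        super().__init__()
        self.backbone = backbone
        self.experts = nn.ModuleList(
            nn.Linear(feat_dim, num_classes) for _ in range(num_experts)
        )

    def forward(self, x):
        feats = self.backbone(x)                      # (B, feat_dim)
        expert_logits = [head(feats) for head in self.experts]
        return torch.stack(expert_logits, dim=0).mean(dim=0)
```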
