Search Results for author: Mengke Li

Found 7 papers, 6 papers with code

Feature Fusion from Head to Tail for Long-Tailed Visual Recognition

1 code implementation • 12 Jun 2023 • Mengke Li, Zhikai Hu, Yang Lu, Weichao Lan, Yiu-ming Cheung, Hui Huang

To rectify this issue, we propose to augment tail classes by grafting the diverse semantic information from head classes, referred to as head-to-tail fusion (H2T).
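The snippet describes grafting head-class feature information onto tail-class samples. A minimal sketch of that idea, assuming a simple channel-replacement fusion (the function name `h2t_fuse` and the fixed-ratio channel swap are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def h2t_fuse(tail_feat, head_feat, ratio=0.5, rng=None):
    """Graft a fraction of feature channels from a head-class sample onto a
    tail-class sample (hypothetical sketch of head-to-tail fusion)."""
    rng = rng or np.random.default_rng(0)
    fused = tail_feat.copy()
    n = tail_feat.shape[0]
    # pick a random subset of channels to take from the head-class feature
    idx = rng.choice(n, size=int(n * ratio), replace=False)
    fused[idx] = head_feat[idx]
    return fused

tail = np.zeros(8)   # toy tail-class feature vector
head = np.ones(8)    # toy head-class feature vector
fused = h2t_fuse(tail, head, ratio=0.5)
```

With `ratio=0.5`, half of the fused vector's channels carry head-class semantics while the rest keep the tail sample's own features.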

Joint Channel Estimation and Feedback with Masked Token Transformers in Massive MIMO Systems

no code implementations • 8 Jun 2023 • Mingming Zhao, Lin Liu, Lifu Liu, Mengke Li, Qi Tian

To achieve joint channel estimation and feedback, this paper proposes an encoder-decoder based network that unveils the intrinsic frequency-domain correlation within the CSI matrix.


Long-tailed Visual Recognition via Gaussian Clouded Logit Adjustment

1 code implementation CVPR 2022 Mengke Li, Yiu-ming Cheung, Yang Lu

It is unfavorable for training on balanced data, but can be utilized to adjust the validity of the samples in long-tailed data, thereby solving the distorted embedding space of long-tailed problems.
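The snippet refers to adjusting sample validity via Gaussian-perturbed ("clouded") logits. A rough sketch, assuming the cloud amplitude scales inversely with class frequency so rare classes receive larger training margins (the scaling rule and function name are assumptions for illustration, not the paper's exact calibration):

```python
import numpy as np

def gaussian_cloud_logits(logits, class_counts, scale=4.0, rng=None):
    """Perturb logits with Gaussian 'clouds' whose amplitude grows for
    rarer classes (hypothetical sketch of cloud-style logit adjustment)."""
    rng = rng or np.random.default_rng(0)
    # amplitude is larger for classes with fewer training samples
    amp = scale / np.sqrt(np.asarray(class_counts, dtype=float))
    noise = np.abs(rng.standard_normal(logits.shape)) * amp
    # subtracting the noise enlarges the effective margin for tail classes
    return logits - noise

logits = np.array([2.0, 2.0])                       # class 0: head, class 1: tail
adjusted = gaussian_cloud_logits(logits, class_counts=[10000, 10])
```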

Feature-Balanced Loss for Long-Tailed Visual Recognition

1 code implementation IEEE International Conference on Multimedia and Expo (ICME) 2022 Mengke Li, Yiu-ming Cheung, Juyong Jiang

Deep neural networks frequently suffer from performance degradation when the training data is long-tailed because several majority classes dominate the training, resulting in a biased model.
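To make the "majority classes dominate the training" problem concrete, here is a generic inverse-frequency reweighted cross-entropy, a common rebalancing baseline; it is a stand-in sketch, not the paper's feature-balanced loss:

```python
import numpy as np

def rebalanced_ce(logits, label, class_counts):
    """Cross-entropy with inverse-frequency class weights — a generic
    rebalancing baseline (illustrative; not the paper's exact loss)."""
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    w = 1.0 / np.asarray(class_counts, dtype=float)
    w = w / w.sum() * len(w)          # normalize so weights average to 1
    return -w[label] * np.log(p[label])

counts = [1000, 10]                   # class 0: head, class 1: tail
head_loss = rebalanced_ce(np.array([0.0, 0.0]), 0, counts)
tail_loss = rebalanced_ce(np.array([0.0, 0.0]), 1, counts)
```

For identical logits, a misclassified tail-class sample now incurs a much larger loss than a head-class one, counteracting the head classes' dominance.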

Adjusting Logit in Gaussian Form for Long-Tailed Visual Recognition

1 code implementation • 18 May 2023 • Mengke Li, Yiu-ming Cheung, Yang Lu, Zhikai Hu, Weichao Lan, Hui Huang

Based on these perturbed features, two novel logit adjustment methods are proposed to improve model performance at a modest computational overhead.

Long-Tailed Visual Recognition via Self-Heterogeneous Integration with Knowledge Excavation

1 code implementation CVPR 2023 Yan Jin, Mengke Li, Yang Lu, Yiu-ming Cheung, Hanzi Wang

To address this problem, state-of-the-art methods usually adopt a mixture of experts (MoE) to focus on different parts of the long-tailed distribution.
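The mixture-of-experts idea in the snippet can be sketched at its simplest as averaging the predictions of experts trained to focus on different parts of the distribution; the aggregation below is a plain ensemble-style average, an assumption standing in for SHIKE's actual routing and knowledge-transfer scheme:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def moe_predict(expert_logits):
    """Average the softmax outputs of several experts — a minimal
    aggregation sketch, not SHIKE's actual integration scheme."""
    return np.mean([softmax(l) for l in expert_logits], axis=0)

experts = [np.array([2.0, 0.0, 0.0]),   # expert specialized on head classes
           np.array([0.0, 0.0, 2.0])]   # expert specialized on tail classes
probs = moe_predict(experts)
```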

