no code implementations • 14 Mar 2025 • Chen Shu, Mengke Li, Yiqun Zhang, Yang Lu, Bo Han, Yiu-ming Cheung, Hanzi Wang
T2H noise severely degrades model performance by polluting the head classes and forcing the model to learn tail samples as if they belonged to head classes.
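As a rough illustration of the setting, the sketch below injects T2H (tail-to-head) noise into a long-tailed label set by flipping a fraction of tail-class labels to head classes; the median-count head/tail split and the flip rate are assumptions of this sketch, not the paper's protocol.

```python
import numpy as np

def inject_t2h_noise(labels, class_counts, noise_rate=0.2, seed=None):
    """Flip a fraction of tail-class labels to head classes (T2H noise).

    labels: 1-D integer array of class labels.
    class_counts: per-class sample counts; classes above the median count
        are treated as "head" here (an illustrative threshold).
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    class_counts = np.asarray(class_counts)
    head = np.where(class_counts > np.median(class_counts))[0]
    tail = np.where(class_counts <= np.median(class_counts))[0]
    tail_idx = np.where(np.isin(labels, tail))[0]
    n_flip = int(noise_rate * len(tail_idx))
    flip_idx = rng.choice(tail_idx, size=n_flip, replace=False)
    labels[flip_idx] = rng.choice(head, size=n_flip)  # tail -> head flips
    return labels
```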
1 code implementation • 10 Mar 2025 • Chikai Shang, Mengke Li, Yiqun Zhang, Zhen Chen, Jinlin Wu, Fangqing Gu, Yang Lu, Yiu-ming Cheung
Specifically, we develop a prompt relocation strategy for ADO derived from this formulation, comprising two optimization steps: identifying and pruning idle prompts, followed by determining the optimal blocks for their relocation.
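A minimal sketch of the two-step idea, assuming per-prompt importance scores are already available; how those scores are computed, the idle threshold, and the single-target reallocation are all simplifications for illustration, not the paper's exact procedure.

```python
import torch

def relocate_prompts(prompt_scores, idle_threshold=0.1):
    """Two-step relocation sketch: prune idle prompts, then hand the freed
    slots to the block whose surviving prompts score highest on average.

    prompt_scores: dict {block_index: 1-D tensor of per-prompt importance
    scores}; how importance is measured is an assumption of this sketch.
    Returns (kept_counts, extra_slots), both keyed by block index.
    """
    kept, freed = {}, 0
    for blk, scores in prompt_scores.items():
        mask = scores >= idle_threshold          # step 1: flag idle prompts
        freed += int((~mask).sum())              # slots released by pruning
        kept[blk] = scores[mask]
    # Step 2: rank blocks by mean surviving score (a stand-in "demand"
    # signal) and give every freed slot to the top-ranked block.
    demand = {b: (s.mean().item() if s.numel() else 0.0)
              for b, s in kept.items()}
    target = max(demand, key=demand.get)
    extra = {b: (freed if b == target else 0) for b in kept}
    return {b: int(s.numel()) for b, s in kept.items()}, extra

# Usage idea:
# relocate_prompts({0: torch.tensor([0.05, 0.8]), 1: torch.tensor([0.9, 0.7])})
```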
1 code implementation • 29 Dec 2024 • Yunfan Zhang, Yiqun Zhang, Yang Lu, Mengke Li, Xi Chen, Yiu-ming Cheung
However, some tricky but common FC problems remain relatively unexplored, including heterogeneity in clients' communication capacity and the unknown proper number of clusters $k^*$.
no code implementations • 22 Nov 2024 • Junyang Chen, Yiqun Zhang, Mengke Li, Yang Lu, Yiu-ming Cheung
Clustering complex data in the form of attributed graphs has attracted increasing attention, where appropriate graph representation is a critical prerequisite for accurate cluster analysis.
1 code implementation • 28 Oct 2024 • Mengke Li, Ye Liu, Yang Lu, Yiqun Zhang, Yiu-ming Cheung, Hui Huang
To address this issue, we propose a novel method called Random SAM prompt tuning (RSAM-PT) to improve model generalization, requiring only a single gradient computation at each step.
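For intuition, here is a minimal sketch of a single-gradient SAM-style update in which the ascent direction is replaced by random noise, so only one backward pass is needed per step; the Gaussian noise shape and radius rho are illustrative choices, not the paper's exact schedule.

```python
import torch

def rsam_step(model, loss_fn, batch, optimizer, rho=0.05):
    """One update of random-perturbation SAM: perturb the weights with
    random noise of radius rho, compute the gradient once at the perturbed
    point, then apply that gradient to the original weights."""
    x, y = batch
    params = [p for p in model.parameters() if p.requires_grad]
    # Sample a random perturbation and scale it onto the rho-ball.
    noise = [torch.randn_like(p) for p in params]
    norm = torch.sqrt(sum((n ** 2).sum() for n in noise))
    eps = [rho * n / (norm + 1e-12) for n in noise]
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)                      # move to the perturbed point
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()                        # the only gradient computation
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)                      # restore the original weights
    optimizer.step()                       # descend with perturbed-point grad
    return loss.item()
```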
no code implementations • 4 Aug 2024 • Fengling Lv, Xinyi Shang, Yang Zhou, Yiqun Zhang, Mengke Li, Yang Lu
Additionally, because each client operates in a different environment, data heterogeneity is a classic challenge in federated learning.
no code implementations • 18 Jul 2024 • Mengke Li, Da Li, Guoqing Yang, Yiu-ming Cheung, Hui Huang
Accordingly, we propose the Adaptive PointFormer (APF), which fine-tunes pre-trained 2D models with only a modest number of parameters to directly process point clouds, obviating the need for mapping to images.
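One way to picture this is a small point-patch tokenizer feeding a frozen pre-trained 2D transformer, with only the tokenizer (and a task head) trained; the naive grouping scheme and layer sizes below are hypothetical, not APF's actual design.

```python
import torch
import torch.nn as nn

class PointTokenizer(nn.Module):
    """Hypothetical tokenizer: splits a point cloud into fixed-size groups
    and embeds each group so a frozen pre-trained 2D transformer can
    consume them as a token sequence."""
    def __init__(self, dim=768, n_tokens=196, group=32):
        super().__init__()
        self.n_tokens = n_tokens
        self.embed = nn.Sequential(
            nn.Linear(3 * group, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, pts):                         # pts: (B, n_tokens*group, 3)
        B = pts.shape[0]
        groups = pts.reshape(B, self.n_tokens, -1)  # naive grouping for the sketch
        return self.embed(groups)                   # (B, n_tokens, dim)

# Usage idea: tokens = PointTokenizer()(points); feed `tokens` to a frozen
# ViT encoder and train only the tokenizer plus a small classification head.
```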
no code implementations • 29 Apr 2024 • Liyuan Wang, Yan Jin, Zhen Chen, Jinlin Wu, Mengke Li, Yang Lu, Hanzi Wang
Vision-language pre-training has enabled deep models to take a major step forward in generalizing to unseen domains.
1 code implementation • 23 Apr 2024 • Chenxing Hong, Yan Jin, Zhiqi Kang, Yizhou Chen, Mengke Li, Yang Lu, Hanzi Wang
We find that imbalanced tasks significantly challenge the capability of models to control the trade-off between stability and plasticity from the perspective of recent prompt-based continual learning methods.
no code implementations • 3 Apr 2024 • Weichao Lan, Yiu-ming Cheung, Qing Xu, Buhua Liu, Zhikai Hu, Mengke Li, Zhenghua Chen
In addition to ground-truth supervision, the vanilla KD method treats the teacher's predictions as soft labels to supervise the training of the student model.
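This is the standard Hinton-style distillation objective; a minimal sketch, where the temperature T and mixing weight alpha are common but not universal choices:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    """Vanilla KD objective: cross-entropy on the ground truth plus KL
    divergence between temperature-softened teacher and student
    distributions, scaled by T^2 to keep gradient magnitudes comparable."""
    ce = F.cross_entropy(student_logits, targets)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    return alpha * ce + (1 - alpha) * soft
```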
1 code implementation • 12 Jun 2023 • Mengke Li, Zhikai Hu, Yang Lu, Weichao Lan, Yiu-ming Cheung, Hui Huang
To rectify this issue, we propose to augment tail classes by grafting the diverse semantic information from head classes, referred to as head-to-tail fusion (H2T).
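A minimal sketch of the grafting idea at the feature level, assuming head and tail feature maps come from the same network; the channel-wise grafting and fuse ratio are assumptions of this sketch rather than the paper's exact recipe.

```python
import torch

def h2t_fuse(tail_feat, head_feat, fuse_ratio=0.5, seed=None):
    """Illustrative head-to-tail grafting: overwrite a random subset of a
    tail sample's feature channels with the corresponding channels of a
    head sample.

    tail_feat, head_feat: (C, H, W) feature maps from the same network.
    """
    g = torch.Generator()
    if seed is not None:
        g.manual_seed(seed)
    C = tail_feat.shape[0]
    n = int(fuse_ratio * C)
    idx = torch.randperm(C, generator=g)[:n]
    fused = tail_feat.clone()
    fused[idx] = head_feat[idx]          # graft head semantics into the tail
    return fused
```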
no code implementations • 8 Jun 2023 • Mingming Zhao, Lin Liu, Lifu Liu, Mengke Li, Qi Tian
To achieve joint channel estimation and feedback, this paper proposes an encoder-decoder network that exploits the intrinsic frequency-domain correlation within the CSI matrix.
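As a toy illustration of the encoder-decoder pattern for CSI compression and feedback (the layer sizes and code dimension are placeholders, not the paper's architecture):

```python
import torch
import torch.nn as nn

class CSIAutoencoder(nn.Module):
    """Toy encoder-decoder for CSI feedback: the encoder compresses a
    complex CSI matrix (stored as 2 real channels) into a short codeword,
    and the decoder reconstructs the matrix from that codeword."""
    def __init__(self, n_sub=32, n_ant=32, code_dim=64):
        super().__init__()
        d = 2 * n_sub * n_ant
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(d, 256), nn.ReLU(),
                                 nn.Linear(256, code_dim))
        self.dec = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(),
                                 nn.Linear(256, d),
                                 nn.Unflatten(1, (2, n_sub, n_ant)))

    def forward(self, h):                 # h: (B, 2, n_sub, n_ant)
        return self.dec(self.enc(h))
```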
1 code implementation • CVPR 2022 • Mengke Li, Yiu-ming Cheung, Yang Lu
It is unfavorable for training on balanced data, but it can be exploited to adjust the validity of samples in long-tailed data, thereby rectifying the distorted embedding space in long-tailed problems.
Ranked #15 on Long-tail Learning on CIFAR-10-LT (ρ=100)
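For intuition, a sketch of class-dependent logit perturbation, where rarer classes receive larger perturbations ("clouds"); the log-count amplitude below is an assumption of this sketch, not the paper's exact formulation.

```python
import torch

def clouded_logits(logits, class_counts):
    """Perturb logits with class-dependent Gaussian noise whose amplitude
    grows as a class gets rarer, so tail-class logits form larger clouds.

    logits: (B, C) tensor; class_counts: length-C per-class sample counts.
    """
    counts = torch.as_tensor(class_counts, dtype=logits.dtype)
    amp = torch.log(counts.max() / counts)       # 0 for the largest class
    noise = torch.randn_like(logits).abs()
    return logits - noise * amp                  # push rare-class logits harder
```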
1 code implementation • IEEE International Conference on Multimedia and Expo (ICME) 2022 • Mengke Li, Yiu-ming Cheung, Juyong Jiang
Deep neural networks frequently suffer from performance degradation when the training data is long-tailed because several majority classes dominate the training, resulting in a biased model.
Ranked #16 on Long-tail Learning on CIFAR-10-LT (ρ=100)
1 code implementation • 18 May 2023 • Mengke Li, Yiu-ming Cheung, Yang Lu, Zhikai Hu, Weichao Lan, Hui Huang
That is, the embedding space of head classes severely compresses that of tail classes, which is not conducive to subsequent classifier learning.
1 code implementation • CVPR 2023 • Yan Jin, Mengke Li, Yang Lu, Yiu-ming Cheung, Hanzi Wang
To address this problem, state-of-the-art methods usually adopt a mixture of experts (MoE) to focus on different parts of the long-tailed distribution.
1 code implementation • Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2) 2021 • Mingjie Li, Wenjia Cai, Rui Liu, Yuetian Weng, Xiaoyun Zhao, Cong Wang, Xin Chen, Zhong Liu, Caineng Pan, Mengke Li, Yizhi Liu, Flora D Salim, Karin Verspoor, Xiaodan Liang, Xiaojun Chang
Researchers have explored advanced methods from computer vision and natural language processing to incorporate medical domain knowledge for the generation of readable medical reports.