no code implementations • 5 Oct 2023 • Jie-Jing Shao, Jiang-Xin Shi, Xiao-Wen Yang, Lan-Zhe Guo, Yu-Feng Li
Contrastive Language-Image Pre-training (CLIP) provides a foundation model that aligns visual concepts with natural language, enabling zero-shot recognition on downstream tasks.
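A minimal sketch of the standard CLIP zero-shot recognition pipeline mentioned above, using the Hugging Face transformers API; the checkpoint name, prompts, and image path are illustrative placeholders, not details from the paper:

```python
# Standard CLIP zero-shot classification: score an image against text prompts.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")                       # placeholder image
prompts = [f"a photo of a {c}" for c in ["cat", "dog", "bird"]]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
logits = model(**inputs).logits_per_image              # image-text similarity scores
probs = logits.softmax(dim=-1)                          # zero-shot class probabilities
print(probs)
```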
1 code implementation • 18 Sep 2023 • Jiang-Xin Shi, Tong Wei, Zhi Zhou, Xin-Yan Han, Jie-Jing Shao, Yu-Feng Li
In this paper, we propose PEL, a fine-tuning method that effectively adapts pre-trained models to long-tailed recognition tasks in fewer than 20 epochs without the need for extra data; an illustrative sketch follows this entry.
Ranked #1 on Long-tail Learning on CIFAR-100-LT (ρ=10) (using extra training data)
Fine-Grained Image Classification • Long-tail learning with class descriptors
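The sketch below illustrates the general idea of parameter-efficient fine-tuning on long-tailed data: freeze a pre-trained backbone, train only a lightweight head, and counter class imbalance with logit adjustment. This is a generic recipe under assumed choices (ResNet-50 backbone, linear head, SGD), not the exact PEL implementation:

```python
# Hedged sketch: frozen backbone + small trainable head + logit-adjusted loss.
import torch
import torch.nn as nn
import torchvision

backbone = torchvision.models.resnet50(weights="IMAGENET1K_V2")
backbone.fc = nn.Identity()                     # expose 2048-d features
for p in backbone.parameters():
    p.requires_grad = False                     # backbone stays frozen

num_classes = 100
head = nn.Linear(2048, num_classes)             # the only trainable parameters

class_counts = torch.ones(num_classes)          # replace with real per-class label counts
log_prior = torch.log(class_counts / class_counts.sum())

def logit_adjusted_loss(logits, targets, tau=1.0):
    # Shift logits by the log class prior so tail classes are not ignored.
    return nn.functional.cross_entropy(logits + tau * log_prior, targets)

optimizer = torch.optim.SGD(head.parameters(), lr=0.01, momentum=0.9)

def train_step(images, targets):
    with torch.no_grad():
        feats = backbone(images)
    loss = logit_adjusted_loss(head(feats), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training only the head keeps the number of updated parameters small, which is why such schemes can converge within a few epochs.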
4 code implementations • 8 Oct 2022 • Tong Wei, Zhen Mao, Jiang-Xin Shi, Yu-Feng Li, Min-Ling Zhang
Multi-label learning has attracted significant attention from both academia and industry in recent decades.
no code implementations • 26 May 2022 • Tong Wei, Qian-Yu Liu, Jiang-Xin Shi, Wei-Wei Tu, Lan-Zhe Guo
TRAS transforms the imbalanced pseudo-label distribution of a traditional SSL model via a carefully designed function to enhance the supervisory signals for minority classes.
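The sketch below shows one common way to rebalance pseudo-labels in semi-supervised learning: track the model's (imbalanced) predicted class marginal and divide it out before selecting pseudo-labels, which boosts minority-class signal. This heuristic is an assumption for illustration and is not necessarily the exact transformation used by TRAS:

```python
# Hedged sketch: rebalance pseudo-label probabilities by a running class marginal.
import torch
import torch.nn.functional as F

num_classes = 10
running_dist = torch.full((num_classes,), 1.0 / num_classes)  # EMA of pseudo-label marginal

def rebalanced_pseudo_labels(logits, threshold=0.95, momentum=0.99):
    global running_dist
    probs = F.softmax(logits, dim=-1)
    # Track the marginal distribution the model currently predicts.
    running_dist = momentum * running_dist + (1 - momentum) * probs.mean(dim=0)
    # Down-weight head classes, up-weight tail classes.
    balanced = probs / (running_dist + 1e-6)
    balanced = balanced / balanced.sum(dim=-1, keepdim=True)
    conf, pseudo = balanced.max(dim=-1)
    mask = conf.ge(threshold)                   # keep only confident pseudo-labels
    return pseudo, mask
```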
no code implementations • 22 Oct 2021 • Tong Wei, Jiang-Xin Shi, Yu-Feng Li, Min-Ling Zhang
Deep neural networks have been shown to be very powerful methods for many supervised learning tasks.
no code implementations • 26 Aug 2021 • Tong Wei, Jiang-Xin Shi, Wei-Wei Tu, Yu-Feng Li
To overcome this limitation, we establish a new prototypical noise-detection method by designing a distance-based metric that is resistant to label noise; see the sketch after this entry.
Ranked #25 on Image Classification on mini WebVision 1.0
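A minimal sketch of distance-based prototypical noise detection: compute a prototype (mean feature) per class and flag samples whose features lie far from the prototype of their assigned label. The feature extractor, cosine distance, and quantile threshold are assumed choices for illustration, not the paper's exact settings:

```python
# Hedged sketch: flag likely-mislabeled samples by distance to class prototypes.
import torch
import torch.nn.functional as F

def detect_noisy_labels(features, labels, num_classes, quantile=0.9):
    features = F.normalize(features, dim=-1)            # unit-norm features
    prototypes = torch.zeros(num_classes, features.size(1))
    for c in range(num_classes):
        members = features[labels == c]
        if len(members) > 0:
            prototypes[c] = F.normalize(members.mean(dim=0), dim=0)
    # Cosine distance between each sample and the prototype of its own label.
    dist = 1.0 - (features * prototypes[labels]).sum(dim=-1)
    # Samples far beyond the typical distance are treated as likely mislabeled.
    return dist > torch.quantile(dist, quantile)
```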