LMPT: Prompt Tuning with Class-Specific Embedding Loss for Long-tailed Multi-Label Visual Recognition

8 May 2023  ·  Peng Xia, Di Xu, Lie Ju, Ming Hu, Jun Chen, ZongYuan Ge

Long-tailed multi-label visual recognition (LTML) is a highly challenging task due to label co-occurrence and imbalanced data distribution. In this work, we propose a unified framework for LTML, namely prompt tuning with class-specific embedding loss (LMPT), which captures semantic feature interactions between categories by combining text and image modality data and improves performance on both head and tail classes simultaneously. Specifically, LMPT introduces an embedding loss function with class-aware soft margin and re-weighting to learn class-specific contexts with the benefit of textual descriptions (captions), which helps establish semantic relationships between classes, especially between head and tail classes. Furthermore, to account for class imbalance, the distribution-balanced loss is adopted as the classification loss function, further improving performance on tail classes without compromising head classes. Extensive experiments on the VOC-LT and COCO-LT datasets demonstrate that the proposed method significantly surpasses previous state-of-the-art methods and zero-shot CLIP in LTML. Our code is fully available at \url{https://github.com/richard-peng-xia/LMPT}.
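As a rough illustration of the embedding-loss idea described in the abstract, the sketch below implements a class-aware soft-margin, re-weighted loss over CLIP-style image-text similarities in PyTorch. The margin schedule, the inverse-frequency weighting, the logit scale, and all function and argument names are assumptions made for illustration; they are not the authors' actual formulation, which is given in the paper and repository.

```python
import torch
import torch.nn.functional as F

def class_specific_embedding_loss(img_emb, txt_emb, labels, class_freq,
                                  base_margin=0.2, logit_scale=10.0):
    """Hedged sketch: class-aware soft-margin embedding loss with
    inverse-frequency re-weighting over image/text similarities.

    img_emb:    (B, D) image embeddings (e.g. CLIP image encoder output)
    txt_emb:    (C, D) class prompt embeddings (e.g. CLIP text encoder output)
    labels:     (B, C) multi-hot ground-truth labels
    class_freq: (C,)   per-class sample counts in the training set
    """
    labels = labels.float()
    class_freq = class_freq.float()

    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    sim = img_emb @ txt_emb.t()  # (B, C) cosine similarities in [-1, 1]

    # Class-aware soft margin: rarer classes receive a larger margin, so
    # their positives must be separated more strongly (assumed schedule).
    margin = base_margin * (class_freq.max() / class_freq).log1p()  # (C,)

    # Soft-margin construction: shift positive similarities down by the
    # per-class margin before the logistic loss.
    logits = logit_scale * (sim - margin * labels)

    # Inverse-frequency re-weighting, normalized to mean 1 (assumption).
    weight = 1.0 / class_freq
    weight = weight * (weight.numel() / weight.sum())  # (C,)

    per_elem = F.binary_cross_entropy_with_logits(
        logits, labels, reduction="none")  # (B, C)
    return (per_elem * weight).mean()
```

In the actual method, txt_emb would come from learnable class-specific prompt contexts passed through CLIP's text encoder, and this embedding term would be combined with the distribution-balanced classification loss rather than used alone.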


Results from the Paper


 Ranked #1 on Long-tail Learning on COCO-MLT (using extra training data)

Task                 Dataset    Model              Metric Name   Metric Value   Global Rank
Long-tail Learning   COCO-MLT   LMPT (ViT-B/16)    Average mAP   66.19          #1
Long-tail Learning   COCO-MLT   LMPT (ResNet-50)   Average mAP   58.97          #3
Long-tail Learning   VOC-MLT    LMPT (ResNet-50)   Average mAP   85.44          #3
Long-tail Learning   VOC-MLT    LMPT (ViT-B/16)    Average mAP   87.88          #1
