An Empirical Study of CLIP for Text-based Person Search

19 Aug 2023  ·  Min Cao, Yang Bai, Ziyin Zeng, Mang Ye, Min Zhang

Text-based Person Search (TBPS) aims to retrieve person images using natural language descriptions. Recently, Contrastive Language-Image Pre-training (CLIP), a universal large cross-modal vision-language pre-training model, has achieved remarkable performance on various cross-modal downstream tasks thanks to its powerful cross-modal semantic learning capacity. TBPS, as a fine-grained cross-modal retrieval task, has likewise seen a rise in CLIP-based research. To explore the potential of the vision-language pre-training model for downstream TBPS tasks, this paper makes the first attempt to conduct a comprehensive empirical study of CLIP for TBPS, contributing a straightforward, incremental, yet strong TBPS-CLIP baseline to the TBPS community. We revisit critical design considerations under CLIP, including data augmentation and the loss function. With these designs and practical training tricks, the model attains satisfactory performance without any sophisticated modules. We also conduct probing experiments on TBPS-CLIP in terms of model generalization and model compression, demonstrating the effectiveness of TBPS-CLIP from various aspects. This work is expected to provide empirical insights and highlight directions for future CLIP-based TBPS research.
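The abstract highlights revisiting the loss function under CLIP. For reference, below is a minimal PyTorch sketch of the symmetric image-text contrastive (InfoNCE) loss that CLIP is trained with and that CLIP-based TBPS methods typically build on; the function and argument names are illustrative and not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_feats, text_feats, temperature=0.07):
    """Symmetric image-text contrastive (InfoNCE) loss in the CLIP style.

    image_feats, text_feats: (N, D) tensors of paired embeddings, where
    row i of one modality matches row i of the other. Names and the
    fixed temperature are illustrative assumptions, not the paper's API.
    """
    # L2-normalize so the dot product is a cosine similarity.
    image_feats = F.normalize(image_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)

    # (N, N) similarity matrix, scaled by the temperature.
    logits = image_feats @ text_feats.t() / temperature

    # The matched image-text pair for each row sits on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Average the image-to-text and text-to-image cross-entropy terms.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2
```

In retrieval terms, each image in the batch treats its paired caption as the positive and the other N-1 captions as negatives, and vice versa; TBPS-CLIP's loss-function study starts from this objective.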

Task: Text-based Person Retrieval — TBPS-CLIP (ViT-B/16), global rank on each benchmark in parentheses

Dataset      R@1         R@5         R@10        mAP
CUHK-PEDES   73.54 (#4)  88.19 (#5)  92.35 (#5)  65.38 (#7)
ICFG-PEDES   65.05 (#4)  80.34 (#4)  85.47 (#4)  39.83 (#5)
RSTPReid     61.95 (#4)  83.55 (#5)  88.75 (#4)  48.26 (#4)
