RaSa: Relation and Sensitivity Aware Representation Learning for Text-based Person Search

23 May 2023  ·  Yang Bai, Min Cao, Daming Gao, Ziqiang Cao, Chen Chen, Zhenfeng Fan, Liqiang Nie, Min Zhang ·

Text-based person search aims to retrieve images of a specified person given a textual description. The key to tackling this challenging task is learning powerful multi-modal representations. To this end, we propose a Relation and Sensitivity aware representation learning method (RaSa), comprising two novel tasks: Relation-Aware learning (RA) and Sensitivity-Aware learning (SA). On the one hand, existing methods cluster the representations of all positive pairs without distinction and overlook the noise caused by weak positive pairs, where the text and the paired image have noisy correspondences, thus leading to overfitting. RA offsets this overfitting risk by introducing a novel positive relation detection task (i.e., learning to distinguish strong from weak positive pairs). On the other hand, learning representations that are invariant under data augmentation (i.e., insensitive to certain transformations) is common practice for improving robustness in existing methods. Going beyond this, SA encourages the representation to perceive sensitive transformations (i.e., by learning to detect replaced words), further promoting robustness. Experiments demonstrate that RaSa outperforms existing state-of-the-art methods by 6.94%, 4.45% and 15.35% in terms of Rank@1 on the CUHK-PEDES, ICFG-PEDES and RSTPReid datasets, respectively. Code is available at: https://github.com/Flame-Chasers/RaSa.
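As a loose illustration of the SA idea (not the authors' implementation, which is in the linked repository), the training signal for word-replacement detection can be built by randomly swapping words in a caption and labeling each position as original or replaced; a detector head is then trained to predict these per-token labels. A minimal sketch in plain Python, with a hypothetical vocabulary and replacement rate:

```python
import random

def make_sa_example(tokens, vocab, replace_prob=0.3, rng=None):
    """Corrupt a caption for sensitivity-aware learning (illustrative only).

    Each token is replaced with a different word from `vocab` with
    probability `replace_prob`; the label is 1 where a replacement
    occurred and 0 otherwise. A detector would be trained to recover
    these labels from the corrupted caption and the paired image.
    """
    rng = rng or random.Random(0)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < replace_prob:
            # Pick a replacement word that differs from the original.
            corrupted.append(rng.choice([w for w in vocab if w != tok]))
            labels.append(1)
        else:
            corrupted.append(tok)
            labels.append(0)
    return corrupted, labels

# Example: corrupt a short caption.
tokens = "a man in a red jacket".split()
vocab = ["a", "man", "woman", "in", "red", "blue", "jacket", "shirt"]
corrupted, labels = make_sa_example(tokens, vocab, replace_prob=0.5)
```

The per-token labels supply a binary cross-entropy target; in contrast to augmentation-invariance objectives, the representation here must be *sensitive* to the transformation in order to detect it.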

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Text based Person Retrieval | CUHK-PEDES | RaSa | R@1 | 76.51 | #2 |
| Text based Person Retrieval | CUHK-PEDES | RaSa | R@5 | 90.29 | #2 |
| Text based Person Retrieval | CUHK-PEDES | RaSa | R@10 | 94.25 | #1 |
| Text based Person Retrieval | CUHK-PEDES | RaSa | mAP | 69.38 | #1 |
| Text based Person Retrieval | ICFG-PEDES | RaSa | R@1 | 65.28 | #3 |
| Text based Person Retrieval | ICFG-PEDES | RaSa | R@5 | 80.4 | #3 |
| Text based Person Retrieval | ICFG-PEDES | RaSa | R@10 | 85.12 | #5 |
| Text based Person Retrieval | ICFG-PEDES | RaSa | mAP | 41.29 | #1 |
| Text based Person Retrieval | RSTPReid | RaSa | R@1 | 66.90 | #2 |
| Text based Person Retrieval | RSTPReid | RaSa | R@5 | 86.50 | #2 |
| Text based Person Retrieval | RSTPReid | RaSa | R@10 | 91.35 | #2 |
| Text based Person Retrieval | RSTPReid | RaSa | mAP | 52.31 | #2 |
