DiP: Learning Discriminative Implicit Parts for Person Re-Identification

24 Dec 2022  ·  Dengjie Li, Siyu Chen, Yujie Zhong, Lin Ma

In person re-identification (ReID), many works learn part features to improve performance over global image features. Existing methods extract part features explicitly, either through a hand-designed image division or via keypoints obtained from external visual systems. In this work, we propose to learn Discriminative implicit Parts (DiPs), which are decoupled from explicit body parts. DiPs can therefore learn to extract any discriminative features that help distinguish identities, including cues beyond predefined body parts (such as accessories). Moreover, we propose a novel implicit position that gives each DiP a geometric interpretation. The implicit position also serves as a learning signal that encourages DiPs to be more position-equivariant with the identity in the image. Lastly, an additional DiP weighting is introduced to handle invisible or occluded parts and further improve the feature representation of DiPs. Extensive experiments show that the proposed method achieves state-of-the-art performance on multiple person ReID benchmarks.
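The abstract names three ingredients: implicit part extraction decoupled from explicit body parts, an implicit position that gives each part a geometric interpretation, and a per-part weighting for occluded or invisible parts. The sketch below illustrates one plausible way to realize all three with learnable part queries that cross-attend over backbone features; it is a minimal sketch under our own assumptions, not the authors' implementation, and every name (`ImplicitPartHead`, `weight_head`) and hyperparameter here is hypothetical.

```python
import torch
import torch.nn as nn

class ImplicitPartHead(nn.Module):
    """Illustrative sketch (not the paper's code): K learnable queries
    cross-attend over a spatial feature map to pool per-part features,
    derive an 'implicit position' per part as the attention-weighted
    centroid of pixel coordinates, and predict a per-part weight."""

    def __init__(self, dim: int = 256, num_parts: int = 6):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_parts, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.weight_head = nn.Linear(dim, 1)  # hypothetical DiP-weighting head

    def forward(self, feat_map: torch.Tensor):
        # feat_map: (B, C, H, W) backbone features
        B, C, H, W = feat_map.shape
        tokens = feat_map.flatten(2).transpose(1, 2)        # (B, H*W, C)
        q = self.queries.unsqueeze(0).expand(B, -1, -1)     # (B, K, C)
        parts, attn = self.attn(q, tokens, tokens)          # (B, K, C), (B, K, H*W)

        # Implicit position: attention-weighted centroid over the image grid,
        # giving each part a geometric interpretation in [0, 1]^2.
        ys, xs = torch.meshgrid(
            torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")
        coords = torch.stack([xs, ys], dim=-1).view(1, 1, H * W, 2)
        coords = coords.to(feat_map.device)
        positions = (attn.unsqueeze(-1) * coords).sum(dim=2)  # (B, K, 2)

        # Per-part weight to down-weight occluded or invisible parts.
        weights = torch.sigmoid(self.weight_head(parts))       # (B, K, 1)
        return parts * weights, positions

# Example usage on dummy features:
#   head = ImplicitPartHead()
#   parts, positions = head(torch.randn(2, 256, 24, 8))
```

In this reading, the implicit positions could additionally feed a position-based loss to encourage the equivariance the abstract mentions; how the paper actually supervises them is not specified in this page.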

Task                     | Dataset           | Model            | Metric | Value | Global Rank
-------------------------|-------------------|------------------|--------|-------|------------
Person Re-Identification | CUHK03 (detected) | DiP (without RK) | mAP    | 83.1  | #2
Person Re-Identification | CUHK03 (detected) | DiP (without RK) | Rank-1 | 85.4  | #2
Person Re-Identification | CUHK03 (labeled)  | DiP (without RK) | mAP    | 85.7  | #3
Person Re-Identification | CUHK03 (labeled)  | DiP (without RK) | Rank-1 | 87    | #3
Person Re-Identification | DukeMTMC-reID     | DiP (without RK) | Rank-1 | 91.7  | #14
Person Re-Identification | DukeMTMC-reID     | DiP (without RK) | mAP    | 85.2  | #21
Person Re-Identification | Market-1501       | DiP (without RK) | Rank-1 | 95.8  | #29
Person Re-Identification | Market-1501       | DiP (without RK) | mAP    | 90.8  | #31
Person Re-Identification | MSMT17            | DiP (without RK) | Rank-1 | 87.3  | #11
Person Re-Identification | MSMT17            | DiP (without RK) | mAP    | 71.8  | #11
Person Re-Identification | Occluded-DukeMTMC | DiP (without RK) | Rank-1 | 71.1  | #1
Person Re-Identification | Occluded-DukeMTMC | DiP (without RK) | mAP    | 63.1  | #1
