Discover Cross-Modality Nuances for Visible-Infrared Person Re-Identification

Visible-infrared person re-identification (Re-ID) aims to match pedestrian images of the same identity across different modalities. Existing works mainly focus on alleviating the modality discrepancy by aligning the distributions of features from different modalities. However, nuanced but discriminative information, such as glasses, shoes, and the length of clothes, has not been fully explored, especially in the infrared modality. Without discovering such nuances, it is difficult to match pedestrians across modalities through modality alignment alone, which inevitably reduces feature distinctiveness. In this paper, we propose a joint Modality and Pattern Alignment Network (MPANet) to discover cross-modality nuances in different patterns for visible-infrared person Re-ID, which introduces a modality alleviation module and a pattern alignment module to jointly extract discriminative features. Specifically, we first propose a modality alleviation module to dislodge the modality information from the extracted feature maps. Then, we devise a pattern alignment module, which generates multiple pattern maps for the diverse patterns of a person, to discover nuances. Finally, we introduce a mutual mean learning scheme to alleviate the modality discrepancy and propose a center cluster loss to guide both identity learning and nuance discovery. Extensive experiments on the public SYSU-MM01 and RegDB datasets demonstrate the superiority of MPANet over state-of-the-art methods.
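To make the pattern alignment idea concrete, below is a minimal, hypothetical sketch of how multiple pattern maps could be generated from a backbone feature map and used to pool one descriptor per pattern (e.g., a map attending to glasses, another to shoes). The layer choices, tensor shapes, and the name `PatternAlignment` are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn


class PatternAlignment(nn.Module):
    """Hypothetical sketch of a pattern alignment module.

    Predicts K spatial pattern maps from a backbone feature map and uses them
    as attention weights to pool K part-level descriptors, so fine-grained
    nuances can be represented separately from the global appearance.
    """

    def __init__(self, in_channels: int = 2048, num_patterns: int = 6):
        super().__init__()
        # A 1x1 convolution predicts one spatial map per pattern (assumed design).
        self.pattern_conv = nn.Conv2d(in_channels, num_patterns, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) feature map, e.g., after modality alleviation.
        maps = torch.softmax(self.pattern_conv(feat).flatten(2), dim=-1)  # (B, K, H*W)
        flat = feat.flatten(2)                                            # (B, C, H*W)
        # Attention-weighted pooling: one C-dim descriptor per pattern map.
        parts = torch.einsum("bkn,bcn->bkc", maps, flat)                  # (B, K, C)
        return parts


if __name__ == "__main__":
    x = torch.randn(2, 2048, 18, 9)      # dummy backbone output
    parts = PatternAlignment()(x)
    print(parts.shape)                    # torch.Size([2, 6, 2048])
```

In practice, the per-pattern descriptors would feed the identity and center cluster losses described above; the exact losses and supervision are specified in the paper, not in this sketch.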
