AXM-Net: Cross-Modal Context Sharing Attention Network for Person Re-ID

19 Jan 2021  ·  Ammarah Farooq, Muhammad Awais, Josef Kittler, Syed Safwan Khalid

Cross-modal person re-identification (Re-ID) is critical for modern video surveillance systems. The key challenge is to align the representations across modalities according to the semantic information describing a person while ignoring background information. In this work, we present AXM-Net, a novel CNN-based architecture designed for learning semantically aligned visual and textual representations. The underlying building block consists of multiple streams of feature maps coming from the visual and textual modalities and a novel learnable context-sharing semantic alignment network. We also propose complementary intra-modal attention learning mechanisms to focus on fine-grained local details in the features, along with a cross-modal affinity loss for robust feature matching. Our design is unique in its ability to implicitly learn feature alignments from data. The entire AXM-Net can be trained in an end-to-end manner. We report results on both person search and cross-modal Re-ID tasks. Extensive experimentation validates the proposed framework and demonstrates its superiority by outperforming the current state-of-the-art methods by a significant margin.
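To make the idea of a context-sharing block concrete, below is a minimal PyTorch sketch of one plausible realization: each modality's stream is pooled to a global context vector, the two contexts are fused through a shared bottleneck, and the fused context produces channel-attention weights that re-scale both streams. This is an illustrative assumption, not the official AXM-Net block; the channel sizes (2048 visual, 768 textual), the reduction ratio, and the module/class names are hypothetical.

```python
# Hypothetical sketch of a cross-modal context-sharing attention block
# (illustrative only; not the authors' released AXM-Net code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContextSharingAttention(nn.Module):
    def __init__(self, vis_channels=2048, txt_channels=768, reduction=16):
        super().__init__()
        joint = vis_channels + txt_channels
        # Shared bottleneck over the concatenated visual + textual context,
        # so each modality's channel weights are conditioned on both modalities.
        self.shared = nn.Sequential(
            nn.Linear(joint, joint // reduction),
            nn.ReLU(inplace=True),
        )
        self.vis_gate = nn.Linear(joint // reduction, vis_channels)
        self.txt_gate = nn.Linear(joint // reduction, txt_channels)

    def forward(self, vis_feat, txt_feat):
        # vis_feat: (B, Cv, H, W) image feature maps; txt_feat: (B, Ct, L) token features.
        vis_ctx = F.adaptive_avg_pool2d(vis_feat, 1).flatten(1)   # (B, Cv)
        txt_ctx = F.adaptive_avg_pool1d(txt_feat, 1).flatten(1)   # (B, Ct)
        shared_ctx = self.shared(torch.cat([vis_ctx, txt_ctx], dim=1))
        # Per-modality channel attention derived from the shared context.
        vis_attn = torch.sigmoid(self.vis_gate(shared_ctx))[:, :, None, None]
        txt_attn = torch.sigmoid(self.txt_gate(shared_ctx))[:, :, None]
        return vis_feat * vis_attn, txt_feat * txt_attn


if __name__ == "__main__":
    block = ContextSharingAttention()
    v = torch.randn(4, 2048, 24, 8)   # e.g. backbone features of person crops
    t = torch.randn(4, 768, 64)       # e.g. token features of text descriptions
    v_out, t_out = block(v, t)
    print(v_out.shape, t_out.shape)
```

In this sketch, aligned identity semantics are encouraged only indirectly, by letting each stream's re-weighting depend on the other modality's pooled context; the paper's actual alignment network, intra-modal attention, and affinity loss are separate components not reproduced here.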

Task: Text-based Person Retrieval
Dataset: CUHK-PEDES
Model: AXM-Net

Metric  Value  Global Rank
R@1     61.9   #3
R@5     79.4   #6
R@10    85.75  #6
mAP     57.38  #2
