Cross-Modality Person Re-identification
6 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Farewell to Mutual Information: Variational Distillation for Cross-Modal Person Re-Identification
The Information Bottleneck (IB) provides an information-theoretic principle for representation learning: retain all information relevant for predicting the label while minimizing redundancy.
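The IB trade-off described above can be sketched as a variational objective: a prediction term that keeps the representation informative about the label, plus a β-weighted compression term pulling a Gaussian encoder toward a standard-normal prior. This is a generic variational-IB sketch, not the paper's exact variational-distillation loss; `mu`, `logvar`, and `beta` are illustrative names.

```python
import numpy as np

def ib_objective(logits, labels, mu, logvar, beta=1e-2):
    """Hedged sketch of a variational Information Bottleneck objective.

    logits: (N, C) class scores decoded from the representation z;
    mu, logvar: (N, D) parameters of a Gaussian encoder q(z|x);
    beta: weight on the compression (KL) term.
    """
    # Prediction term: softmax cross-entropy on the correct labels.
    s = logits - logits.max(axis=1, keepdims=True)
    log_probs = s - np.log(np.exp(s).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()
    # Compression term: KL( N(mu, exp(logvar)) || N(0, I) ), closed form.
    kl = 0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar).sum(axis=1).mean()
    return ce + beta * kl
```

With `mu = 0` and `logvar = 0` the encoder matches the prior and the KL term vanishes, leaving only the prediction loss; pushing `mu` away from zero increases the compression penalty.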
RGB-Infrared Cross-Modality Person Re-Identification via Joint Pixel and Feature Alignment
First, it can exploit pixel alignment and feature alignment jointly.
Parameter Sharing Exploration and Hetero-Center based Triplet Loss for Visible-Thermal Person Re-Identification
By appropriately splitting the ResNet50 model into a modality-specific feature-extraction network and a modality-shared feature-embedding network, we experimentally demonstrate the effect of parameter sharing in the two-stream network for VT Re-ID.
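A hetero-center triplet loss of the kind named in the title can be sketched as follows: average each identity's features within each modality to get per-modality "centers", then pull same-identity centers together across modalities while pushing different-identity centers apart by a margin. This is a minimal numpy illustration of the idea, not the paper's exact formulation; all names and the margin value are assumptions.

```python
import numpy as np

def hetero_center_triplet_loss(vis_feats, ir_feats, labels, margin=0.3):
    """Hedged sketch of a hetero-center triplet loss.

    vis_feats, ir_feats: (N, D) features from the visible / infrared
    streams; labels: (N,) identity labels shared by both arrays.
    """
    ids = np.unique(labels)
    # Per-identity centers in each modality.
    vc = np.stack([vis_feats[labels == i].mean(axis=0) for i in ids])
    ic = np.stack([ir_feats[labels == i].mean(axis=0) for i in ids])
    loss = 0.0
    for a in range(len(ids)):
        pos = np.linalg.norm(vc[a] - ic[a])  # same identity, cross modality
        negs = [np.linalg.norm(vc[a] - ic[b])
                for b in range(len(ids)) if b != a]
        # Hinge against the hardest (closest) wrong-identity center.
        loss += max(0.0, margin + pos - min(negs))
    return loss / len(ids)
```

Operating on centers rather than individual samples reduces the number of triplet terms and smooths out per-sample noise, which is the motivation the title alludes to.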
Leaning Compact and Representative Features for Cross-Modality Person Re-Identification
This paper pays close attention to the cross-modality visible-infrared person re-identification (VI Re-ID) task, which aims to match pedestrian samples between visible and infrared modes.
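The matching task described above reduces, at evaluation time, to ranking gallery images of one modality against query features from the other. A minimal sketch of that retrieval step, assuming features have already been extracted by some cross-modality model (feature extraction itself is out of scope here):

```python
import numpy as np

def rank1_accuracy(query_feats, query_ids, gallery_feats, gallery_ids):
    """Toy VI Re-ID evaluation: match (e.g. infrared) query features
    against a (visible) gallery by cosine similarity; return rank-1
    accuracy, i.e. the fraction of queries whose nearest gallery
    image has the same identity."""
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = q @ g.T                # (num_query, num_gallery) cosine scores
    best = sims.argmax(axis=1)    # most similar gallery item per query
    return float((gallery_ids[best] == query_ids).mean())
```

Compact, modality-invariant features are what make this cosine ranking work: the closer the two modalities' embeddings of the same pedestrian, the higher the rank-1 score.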
Self-Supervised Modality-Aware Multiple Granularity Pre-Training for RGB-Infrared Person Re-Identification
Much of that is due to the notorious modality bias training issue brought by the single-modality ImageNet pre-training, which might yield RGB-biased representations that severely hinder the cross-modality image retrieval.
Bridging the Gap: Multi-Level Cross-Modality Joint Alignment for Visible-Infrared Person Re-Identification
Visible-Infrared person Re-IDentification (VI-ReID) is a challenging cross-modality image retrieval task that aims to match pedestrians' images across visible and infrared cameras.