Person Re-Identification
510 papers with code • 34 benchmarks • 57 datasets
Person Re-Identification (ReID) is a computer vision task whose goal is to match a person's identity across different, non-overlapping camera views in images or video. It involves detecting and tracking a person and then using cues such as appearance, body shape, and clothing to associate the same identity across frames in a robust and efficient manner.
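At retrieval time, the matching step described above is typically framed as nearest-neighbor search over appearance embeddings. The sketch below illustrates this with hypothetical 4-dimensional feature vectors and cosine similarity, a common (but not universal) metric choice; the embeddings and function name are illustrative, not from any specific paper:

```python
import numpy as np

def cosine_rank(query, gallery):
    """Rank gallery embeddings by cosine similarity to the query embedding."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = g @ q                      # cosine similarity per gallery entry
    return np.argsort(-scores), scores  # best match first

# Hypothetical embeddings: one query person, three gallery detections
query = np.array([0.9, 0.1, 0.0, 0.4])
gallery = np.array([
    [0.8, 0.2, 0.1, 0.5],  # same person seen from another camera
    [0.0, 0.9, 0.8, 0.1],  # different person
    [0.3, 0.3, 0.3, 0.3],  # different person
])
ranking, scores = cosine_rank(query, gallery)
print(ranking[0])  # index of the best-matching gallery entry
```

In practice the embeddings come from a trained deep network and the gallery may contain millions of detections, so approximate nearest-neighbor indexes are often used instead of a full scan.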
Subtasks
- Unsupervised Person Re-Identification
- Video-Based Person Re-Identification
- Generalizable Person Re-identification
- Cloth-Changing Person Re-Identification
- Large-Scale Person Re-Identification
- Cross-Modal Person Re-Identification
- Self-Supervised Person Re-Identification
- Clothes Changing Person Re-Identification
- Image-To-Video Person Re-Identification
- Semi-Supervised Person Re-Identification
- Direct Transfer Person Re-identification
- Federated Lifelong Person ReID
Latest papers with no code
Spectrum-guided Feature Enhancement Network for Event Person Re-Identification
This network consists of two innovative components: the Multi-grain Spectrum Attention Mechanism (MSAM) and the Consecutive Patch Dropout Module (CPDM).
Exploring Homogeneous and Heterogeneous Consistent Label Associations for Unsupervised Visible-Infrared Person ReID
In response, we introduce a Modality-Unified Label Transfer (MULT) module that simultaneously accounts for both homogeneous and heterogeneous fine-grained instance-level structures, yielding high-quality cross-modality label associations.
MLLMReID: Multimodal Large Language Model-based Person Re-identification
This paper investigates how to adapt multimodal large language models (MLLMs) to the ReID task.
Cross-Modality Perturbation Synergy Attack for Person Re-identification
For instance, infrared images are typically grayscale, unlike visible images that contain color information.
A Deep Hierarchical Feature Sparse Framework for Occluded Person Re-Identification
Most existing methods tackle the problem of occluded person re-identification (ReID) by utilizing auxiliary models, resulting in a complicated and inefficient ReID framework that is unacceptable for real-time applications.
Multi-Memory Matching for Unsupervised Visible-Infrared Person Re-Identification
To associate cross-modality clustered pseudo-labels, we design a Multi-Memory Learning and Matching (MMLM) module, ensuring that optimization explicitly focuses on the nuances of individual perspectives and establishes reliable cross-modality correspondences.
CLIP-Driven Semantic Discovery Network for Visible-Infrared Person Re-Identification
Additionally, acknowledging the complementary nature of semantic details across different modalities, we integrate text features from the bimodal language descriptions to achieve comprehensive semantics.
Prompt Decoupling for Text-to-Image Person Re-identification
In the first stage, we freeze the two encoders from CLIP and solely focus on optimizing the prompts to alleviate the domain gap between the original training data of CLIP and downstream tasks.
Frequency Domain Nuances Mining for Visible-Infrared Person Re-identification
Specifically, we propose a novel Frequency Domain Nuances Mining (FDNM) method to explore the cross-modality frequency domain information, which mainly includes an amplitude guided phase (AGP) module and an amplitude nuances mining (ANM) module.
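The amplitude/phase split that such frequency-domain methods build on can be illustrated with NumPy's FFT. This is the generic decomposition only, not the paper's AGP or ANM modules; the toy image and the style/structure labels in the comments reflect a common assumption in VI-ReID work:

```python
import numpy as np

img = np.random.default_rng(1).random((8, 8))  # toy grayscale "image"

spectrum = np.fft.fft2(img)
amplitude = np.abs(spectrum)   # often treated as style/modality-related information
phase = np.angle(spectrum)     # often treated as structure/content information

# Recombining amplitude and phase recovers the original image exactly
recon = np.fft.ifft2(amplitude * np.exp(1j * phase)).real
print(np.allclose(recon, img))
```

Because the decomposition is lossless, a method can manipulate amplitude (e.g., to suppress modality-specific style) while keeping phase, then invert back to the image domain.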
Frequency Domain Modality-invariant Feature Learning for Visible-infrared Person Re-Identification
Visible-infrared person re-identification (VI-ReID) is challenging due to the significant cross-modality discrepancies between visible and infrared images.