Person Re-Identification

510 papers with code • 34 benchmarks • 57 datasets

Person Re-Identification is a computer vision task whose goal is to match a person's identity across different cameras or locations in a video or image sequence. It involves detecting and tracking a person, then using features such as appearance, body shape, and clothing to associate the same person across multiple non-overlapping camera views in a robust and efficient manner.
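In practice, the matching step is commonly implemented by extracting a feature embedding per detected person and ranking gallery images by similarity to a query. A minimal sketch of this retrieval step, using cosine similarity over toy vectors (not a trained model; all names here are illustrative):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(a @ b)

def rank_gallery(query_feat, gallery_feats):
    """Rank gallery entries from most to least similar to the query."""
    scores = [cosine_similarity(query_feat, g) for g in gallery_feats]
    return np.argsort(scores)[::-1]  # indices, most similar first

# Toy 4-dim appearance embeddings
query = np.array([1.0, 0.2, 0.0, 0.5])
gallery = [
    np.array([0.9, 0.1, 0.1, 0.6]),   # same person, different camera view
    np.array([-0.5, 1.0, 0.8, 0.0]),  # different person
]
ranking = rank_gallery(query, gallery)
# gallery[0] ranks first, as its embedding is closest to the query
```

Real systems replace the toy vectors with embeddings from a deep network trained with metric-learning losses, but the ranking logic is essentially this.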

Latest papers with no code

Spectrum-guided Feature Enhancement Network for Event Person Re-Identification

no code yet • 2 Feb 2024

This network consists of two innovative components: the Multi-grain Spectrum Attention Mechanism (MSAM) and the Consecutive Patch Dropout Module (CPDM).

Exploring Homogeneous and Heterogeneous Consistent Label Associations for Unsupervised Visible-Infrared Person ReID

no code yet • 1 Feb 2024

In response, we introduce a Modality-Unified Label Transfer (MULT) module that simultaneously accounts for both homogeneous and heterogeneous fine-grained instance-level structures, yielding high-quality cross-modality label associations.

MLLMReID: Multimodal Large Language Model-based Person Re-identification

no code yet • 24 Jan 2024

This paper investigates how to adapt multimodal large language models (MLLMs) for the task of ReID.

Cross-Modality Perturbation Synergy Attack for Person Re-identification

no code yet • 18 Jan 2024

For instance, infrared images are typically grayscale, unlike visible images that contain color information.

A Deep Hierarchical Feature Sparse Framework for Occluded Person Re-Identification

no code yet • 15 Jan 2024

Most existing methods tackle the problem of occluded person re-identification (ReID) by utilizing auxiliary models, resulting in a complicated and inefficient ReID framework that is unacceptable for real-time applications.

Multi-Memory Matching for Unsupervised Visible-Infrared Person Re-Identification

no code yet • 12 Jan 2024

To associate cross-modality clustered pseudo-labels, we design a Multi-Memory Learning and Matching (MMLM) module, ensuring that optimization explicitly focuses on the nuances of individual perspectives and establishes reliable cross-modality correspondences.

CLIP-Driven Semantic Discovery Network for Visible-Infrared Person Re-Identification

no code yet • 11 Jan 2024

Additionally, acknowledging the complementary nature of semantic details across different modalities, we integrate text features from the bimodal language descriptions to achieve comprehensive semantics.

Prompt Decoupling for Text-to-Image Person Re-identification

no code yet • 4 Jan 2024

In the first stage, we freeze the two encoders from CLIP and focus solely on optimizing the prompts to alleviate the domain gap between the original training data of CLIP and downstream tasks.

Frequency Domain Nuances Mining for Visible-Infrared Person Re-identification

no code yet • 4 Jan 2024

Specifically, we propose a novel Frequency Domain Nuances Mining (FDNM) method to explore the cross-modality frequency domain information, which mainly includes an amplitude guided phase (AGP) module and an amplitude nuances mining (ANM) module.
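Frequency-domain methods such as FDNM build on the standard decomposition of an image's Fourier transform into an amplitude spectrum and a phase spectrum. This generic NumPy sketch (not the paper's method) shows the decomposition and that the two spectra together reconstruct the image:

```python
import numpy as np

def amplitude_phase(img):
    """Split an image into its Fourier amplitude and phase spectra."""
    f = np.fft.fft2(img)
    return np.abs(f), np.angle(f)

def recompose(amplitude, phase):
    """Rebuild an image from amplitude and phase spectra."""
    f = amplitude * np.exp(1j * phase)
    return np.real(np.fft.ifft2(f))

rng = np.random.default_rng(0)
img = rng.random((8, 8))        # stand-in for a grayscale image patch
amp, pha = amplitude_phase(img)
rebuilt = recompose(amp, pha)
# rebuilt matches img up to floating-point error
```

Methods in this family typically treat amplitude and phase differently (e.g., guiding or mixing one spectrum across modalities), since phase is widely regarded as carrying most of the structural content of an image.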

Frequency Domain Modality-invariant Feature Learning for Visible-infrared Person Re-Identification

no code yet • 3 Jan 2024

Visible-infrared person re-identification (VI-ReID) is challenging due to the significant cross-modality discrepancies between visible and infrared images.