Generalizable Person Re-identification
21 papers with code • 4 benchmarks • 9 datasets
Generalizable person re-identification refers to methods that are trained on a source dataset and evaluated directly on a target dataset, without any domain adaptation or transfer learning on the target.
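The evaluation protocol above reduces to training on the source domain and then scoring retrieval directly on target-domain embeddings. A minimal sketch of the standard rank-1 metric (nearest-gallery match by cosine similarity) is below; the function name and the toy 2-D embeddings are illustrative, not from any specific paper.

```python
import numpy as np

def rank1_accuracy(query_feats, query_ids, gallery_feats, gallery_ids):
    """Rank-1 accuracy: does the nearest gallery embedding share the query's identity?"""
    # L2-normalise so the dot product equals cosine similarity.
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sim = q @ g.T                 # (num_query, num_gallery) similarity matrix
    nearest = sim.argmax(axis=1)  # index of each query's best gallery match
    return float(np.mean(gallery_ids[nearest] == query_ids))

# Toy target-domain example: 2-D embeddings for two identities.
gallery = np.array([[1.0, 0.0], [0.0, 1.0]])
g_ids = np.array([0, 1])
queries = np.array([[0.9, 0.1], [0.1, 0.9]])
q_ids = np.array([0, 1])
print(rank1_accuracy(queries, q_ids, gallery, g_ids))  # 1.0
```

In the generalizable setting, the model producing these embeddings never sees target-domain data during training; only this test-time retrieval touches the target.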
Most implemented papers
Benchmarks for Corruption Invariant Person Re-identification
When deploying a person re-identification (ReID) model in safety-critical applications, it is pivotal to understand the robustness of the model against a diverse array of image corruptions.
Calibrated Feature Decomposition for Generalizable Person Re-Identification
The calibrated person representation is decomposed into an identity-relevant feature, a domain feature, and a remaining entangled component.
Mimic Embedding via Adaptive Aggregation: Learning Generalizable Person Re-identification
Meanwhile, META measures the relevance between an unseen target sample and each source domain via normalization statistics, and develops an aggregation module that adaptively integrates multiple experts to mimic the unseen target domain.
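META's full mechanism is not reproduced here; the sketch below only illustrates the general idea of weighting source-domain experts by how close a test batch's feature statistics lie to each expert's stored normalization statistics, then mixing the expert outputs. All names (`expert_weights`, `aggregate`) and the distance-to-mean heuristic are assumptions for illustration.

```python
import numpy as np

def expert_weights(batch_feats, expert_means):
    """Weight each source-domain expert by the proximity of the batch's
    feature mean to that expert's stored (normalization) mean.
    Hypothetical sketch, not META's exact formulation."""
    batch_mean = batch_feats.mean(axis=0)
    dists = np.array([np.linalg.norm(batch_mean - mu) for mu in expert_means])
    logits = -dists                      # closer statistics -> larger weight
    w = np.exp(logits - logits.max())    # numerically stable softmax
    return w / w.sum()

def aggregate(batch_feats, expert_outputs, expert_means):
    """Convex combination of expert outputs for an unseen target batch."""
    w = expert_weights(batch_feats, expert_means)
    return sum(wi * out for wi, out in zip(w, expert_outputs))
```

The softmax over negative distances guarantees the weights are positive and sum to one, so the aggregate stays inside the convex hull of the expert outputs.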
Meta Distribution Alignment for Generalizable Person Re-Identification
Domain Generalizable (DG) person ReID is a challenging task that trains a model on source domains yet expects it to generalize well to unseen target domains.
Style Interleaved Learning for Generalizable Person Re-identification
This common practice causes the model to overfit to existing feature styles in the source domain, resulting in sub-optimal generalization ability on target domains.
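The "feature style" being overfit here is commonly identified with per-channel feature statistics. As a concrete illustration (not the interleaved-learning method itself), the MixStyle-style sketch below synthesizes new styles by mixing the spatial mean and standard deviation of two feature maps; the function name and shapes are assumptions.

```python
import numpy as np

def mix_style(x, y, alpha=0.5):
    """Mix per-channel feature statistics (mean/std over spatial dims) of two
    feature maps of shape (C, H, W) to synthesise a new feature 'style'.
    Illustrative MixStyle-like sketch, not the paper's exact method."""
    eps = 1e-6
    mu_x = x.mean(axis=(1, 2), keepdims=True)
    sig_x = x.std(axis=(1, 2), keepdims=True) + eps
    mu_y = y.mean(axis=(1, 2), keepdims=True)
    sig_y = y.std(axis=(1, 2), keepdims=True) + eps
    mu = alpha * mu_x + (1 - alpha) * mu_y     # mixed style statistics
    sig = alpha * sig_x + (1 - alpha) * sig_y
    # Strip x's own style, then re-style with the mixed statistics.
    return (x - mu_x) / sig_x * sig + mu
```

With `alpha=0` the content of `x` is rendered entirely in `y`'s style, which is why such perturbations expose the model to styles absent from the source domain.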
Style Variable and Irrelevant Learning for Generalizable Person Re-identification
In this paper, we first verify through an experiment that style factors are a vital part of domain bias.
Is Synthetic Dataset Reliable for Benchmarking Generalizable Person Re-Identification?
Through the designed pairwise ranking analysis and comprehensive evaluations, we conclude that a recent large-scale synthetic dataset, ClonedPerson, can be reliably used to benchmark GPReID, performing statistically on par with real-world datasets.
Deep Multimodal Fusion for Generalizable Person Re-identification
Person re-identification plays a significant role in realistic scenarios due to its various applications in public security and video surveillance.
Learning Robust Visual-Semantic Embedding for Generalizable Person Re-identification
In this paper, we propose a Multi-Modal Equivalent Transformer called MMET for more robust visual-semantic embedding learning on visual, textual and visual-textual tasks respectively.
Part-Aware Transformer for Generalizable Person Re-identification
Based on the local similarity obtained in CSL, a Part-guided Self-Distillation (PSD) is proposed to further improve the generalization of global features.