Person re-identification is the task of associating images of the same person taken from different cameras or from the same camera on different occasions.
(Image credit: PRID2011 dataset)
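Across the papers listed below, evaluation ultimately reduces to a retrieval step: embed a query image and every gallery image, then rank the gallery by feature similarity. The snippet below is a minimal sketch of that step, assuming some embedding model has already produced fixed-length feature vectors; the function name, dimensions, and random inputs are purely illustrative.

```python
import torch
import torch.nn.functional as F

def rank_gallery(query_emb: torch.Tensor, gallery_embs: torch.Tensor) -> torch.Tensor:
    """Rank gallery images by cosine similarity to a query embedding.

    query_emb:    (D,)   feature vector of the query image
    gallery_embs: (N, D) feature vectors of the gallery images
    Returns gallery indices sorted from most to least similar.
    """
    q = F.normalize(query_emb.unsqueeze(0), dim=1)  # (1, D) unit-norm query
    g = F.normalize(gallery_embs, dim=1)            # (N, D) unit-norm gallery
    sims = (q @ g.t()).squeeze(0)                   # (N,) cosine similarities
    return torch.argsort(sims, descending=True)

# Toy usage with random 128-d embeddings standing in for a real feature extractor.
ranking = rank_gallery(torch.randn(128), torch.randn(5, 128))
print(ranking)
```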
When autonomous systems must account for accuracy and transferability simultaneously, several AI methods, such as adversarial learning, reinforcement learning (RL), and meta-learning, demonstrate strong performance.
In this paper, we propose an attentive feature aggregation module, namely Multi-Granularity Reference-aided Attentive Feature Aggregation (MG-RAFA), to delicately aggregate spatio-temporal features into a discriminative video-level feature representation.
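MG-RAFA itself uses multi-granularity, reference-aided attention over space and time; as a rough intuition only, the sketch below shows the simplest form of attentive aggregation, where learned per-frame weights pool frame-level features into one video-level vector. The module name and dimensions are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AttentiveTemporalPool(nn.Module):
    """Generic attentive aggregation of per-frame features into one clip-level
    vector (illustrative only; not the MG-RAFA module from the paper)."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)  # one attention score per frame

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (T, D) features for T frames of one tracklet
        weights = torch.softmax(self.score(frame_feats), dim=0)  # (T, 1) frame weights
        return (weights * frame_feats).sum(dim=0)                # (D,) video-level feature

clip = torch.randn(8, 256)                    # 8 frames, 256-d features (toy values)
video_feat = AttentiveTemporalPool(256)(clip)
print(video_feat.shape)                       # torch.Size([256])
```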
In this paper, we introduce a novel gating mechanism to deep neural networks.
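The abstract does not specify the form of the gate, so the sketch below only illustrates the general idea of gating: a learned sigmoid branch decides, per feature, how much of a transformed signal passes through. All names and dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class GatedBlock(nn.Module):
    """A simple feature gate: a sigmoid branch modulates each channel of a
    transformed input (a generic illustration of gating, not the specific
    mechanism proposed in the paper)."""
    def __init__(self, dim: int):
        super().__init__()
        self.transform = nn.Linear(dim, dim)  # candidate features
        self.gate = nn.Linear(dim, dim)       # per-channel gate logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.gate(x)) * torch.tanh(self.transform(x))

x = torch.randn(4, 64)                 # batch of 4 feature vectors
print(GatedBlock(64)(x).shape)         # torch.Size([4, 64])
```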
Solving Single-Shot Person Re-Identification (Re-Id) by training Deep Convolutional Neural Networks is a daunting challenge due to the lack of training data: only two images per person are available.
When aligning two groups of local features from two images, we view it as a graph matching problem and propose a cross-graph embedded-alignment (CGEA) layer to jointly learn and embed topology information into local features, and directly predict the similarity score.
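The CGEA layer itself learns and embeds graph topology; the sketch below is a much simpler stand-in that captures only the core idea of aligning two sets of local features before scoring: build a pairwise similarity matrix, softly assign parts of one image to parts of the other, and score the aligned pair. The softmax-based assignment and the temperature value are assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F

def soft_align_similarity(feats_a: torch.Tensor, feats_b: torch.Tensor) -> torch.Tensor:
    """Soft alignment between two sets of local (e.g. body-part) features.

    feats_a: (K, D) local features of image A
    feats_b: (K, D) local features of image B
    Returns a scalar similarity score after softly matching A-parts to B-parts.
    """
    a = F.normalize(feats_a, dim=1)
    b = F.normalize(feats_b, dim=1)
    sim = a @ b.t()                            # (K, K) pairwise part similarities
    assign = torch.softmax(sim / 0.1, dim=1)   # soft assignment of A-parts to B-parts
    aligned_b = assign @ b                     # B features re-ordered to match A
    return F.cosine_similarity(a, aligned_b, dim=1).mean()

score = soft_align_similarity(torch.randn(6, 128), torch.randn(6, 128))
print(float(score))
```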
MPN has three key advantages: 1) it does not need to conduct body part detection in the inference stage; 2) its model is very compact and efficient for both training and testing; 3) in the training stage, it requires only coarse priors of body part locations, which are easy to obtain.
This work considers the problem of domain shift in person re-identification. Being trained on one dataset, a re-identification model usually performs much worse on unseen data.
To tackle the re-ID problem in the context of clothing changes, we propose a novel representation learning model which is able to generate a body shape feature representation without being affected by clothing color or patterns.
The structured domain-translation network can effectively transform the source-domain images into the target domain while well preserving the original intra- and inter-identity relations.
We find that changing clothes makes Re-ID a much harder problem: it complicates learning effective representations and also challenges the ability of previous Re-ID models to generalize to persons wearing unseen (new) clothes.