An effective person re-identification (re-ID) model should learn feature representations that are both discriminative, for distinguishing similar-looking people, and generalisable, for deployment across datasets without any adaptation.
To mitigate the effects of noisy pseudo labels, we propose an unsupervised framework, Mutual Mean-Teaching (MMT), which softly refines the pseudo labels in the target domain and learns better features from it via off-line refined hard pseudo labels and on-line refined soft pseudo labels in an alternating training manner.
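The MMT idea above can be sketched numerically. The following is a minimal, hypothetical numpy illustration (not the paper's implementation): each of two peer classifiers is trained against a mix of off-line hard pseudo labels and on-line soft pseudo labels produced by the *other* network's temporally averaged (mean-teacher) weights, which are themselves updated by an exponential moving average. All dimensions, the 0.5 loss weighting, and the linear-classifier stand-in for a CNN are assumptions for illustration.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 16))           # features of 8 target-domain samples (stand-in for a CNN)
W1 = rng.normal(size=(16, 4))          # peer network 1, 4 pseudo-identity classes
W2 = rng.normal(size=(16, 4))          # peer network 2
W1_ema = W1.copy()                     # mean-teacher (temporally averaged) weights of net 1
W2_ema = W2.copy()

alpha, lr = 0.999, 0.1
hard_labels = rng.integers(0, 4, size=8)   # off-line refined hard pseudo labels (e.g. from clustering)

# On-line soft pseudo labels for net 1 come from net 2's mean teacher.
soft_for_1 = softmax(X @ W2_ema)
p1 = softmax(X @ W1)

# Combined loss for net 1: hard cross-entropy plus soft cross-entropy against the peer.
hard_ce = -np.log(p1[np.arange(8), hard_labels]).mean()
soft_ce = -(soft_for_1 * np.log(p1)).sum(axis=1).mean()
loss1 = 0.5 * hard_ce + 0.5 * soft_ce

# One gradient step on W1: for softmax cross-entropy, d(loss)/d(logits) = p - target.
onehot = np.eye(4)[hard_labels]
target = 0.5 * onehot + 0.5 * soft_for_1
W1 = W1 - lr * X.T @ (p1 - target) / len(X)

# Mean-teacher update: exponential moving average of the student weights.
W1_ema = alpha * W1_ema + (1 - alpha) * W1
```

The symmetric update for net 2 (supervised by net 1's mean teacher) would follow the same pattern, giving the mutual, alternating training the abstract describes.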
Most proposed person re-identification algorithms conduct supervised training and testing on single labeled datasets of small size, so directly deploying these trained models to a large-scale real-world camera network may lead to poor performance due to underfitting.
To overcome this problem, we propose a deep model for soft multilabel learning for unsupervised RE-ID.
Ranked #60 on Person Re-Identification on DukeMTMC-reID
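A soft multilabel, in the sense above, can be sketched as a distribution over a set of reference persons: the unlabeled feature is compared against one "agent" vector per reference person, and the normalized similarities form its soft label. The snippet below is a hypothetical numpy illustration of that comparison step only (the agent count, dimensionality, and softmax normalization are assumptions, not the paper's exact formulation).

```python
import numpy as np

def soft_multilabel(feature, ref_agents):
    """Compare one unlabeled feature with K reference-person agents and
    return a K-dim soft multilabel (relative similarity distribution)."""
    sims = ref_agents @ feature        # similarity to each reference person
    sims = sims - sims.max()           # numerical stability for the softmax
    e = np.exp(sims)
    return e / e.sum()

rng = np.random.default_rng(1)
agents = rng.normal(size=(5, 32))      # 5 hypothetical reference-person agents
agents /= np.linalg.norm(agents, axis=1, keepdims=True)
f = rng.normal(size=32)                # feature of one unlabeled person
f /= np.linalg.norm(f)

y = soft_multilabel(f, agents)         # soft multilabel over the 5 references
```

Because the label is a distribution rather than a single hard identity, two similar-looking unlabeled people can still be separated by how their soft multilabels differ across the reference set.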
The pedestrian clustering and the CNN model are progressively and simultaneously improved until the algorithm converges.
Building upon our SSG, we further introduce a clustering-guided semi-supervised approach named SSG++ to conduct one-shot domain adaptation in an open-set setting (i.e., the number of independent identities in the target domain is unknown).
Specifically, we develop a PatchNet to select patches from the feature map and learn discriminative features for these patches.
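One simple way to picture per-patch features is to divide a CxHxW feature map into horizontal stripes and pool each stripe into one vector. The sketch below is a deliberate simplification: the real PatchNet *learns* which patches to select, whereas this fixed striping (and the patch count of 4) is only an assumed stand-in to show the shape of the idea.

```python
import numpy as np

def extract_patch_features(feat_map, n_patches=4):
    """Split a CxHxW feature map into n_patches horizontal stripes and
    average-pool each stripe into a single patch feature vector."""
    C, H, W = feat_map.shape
    assert H % n_patches == 0, "H must divide evenly into stripes"
    stripes = feat_map.reshape(C, n_patches, H // n_patches, W)
    return stripes.mean(axis=(2, 3)).T     # shape (n_patches, C)

# Toy 2-channel, 8x3 feature map with known values.
fm = np.arange(2 * 8 * 3, dtype=float).reshape(2, 8, 3)
patches = extract_patch_features(fm, n_patches=4)   # 4 patch features of dim 2
```

A discriminative loss can then be applied per patch feature rather than per whole image, which is what lets patch-level supervision work without identity labels.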
In this work, to address the video person re-id task, we formulate a novel Deep Association Learning (DAL) scheme, the first end-to-end deep learning method that uses no identity labels in model initialisation or training.
Ranked #4 on Person Re-Identification on PRID2011
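The core association signal in such label-free schemes can be illustrated as nearest-neighbour matching: for an anchor tracklet, find its best-matched tracklet within the same camera and its best match across cameras, and use those associations as the training targets. The numpy sketch below shows only this matching step; it is a simplification of DAL's cyclic ranking consistency, and the feature dimensions and camera layout are assumptions.

```python
import numpy as np

def best_associations(feats, cams, anchor):
    """For one anchor tracklet, return the index of its best-matched
    tracklet in the same camera and its best match across cameras."""
    sims = feats @ feats[anchor]               # cosine similarity (unit-norm features)
    sims[anchor] = -np.inf                     # never match the anchor with itself
    same = np.where(cams == cams[anchor])[0]
    cross = np.where(cams != cams[anchor])[0]
    intra = same[np.argmax(sims[same])]        # best intra-camera association
    inter = cross[np.argmax(sims[cross])]      # best cross-camera association
    return intra, inter

rng = np.random.default_rng(2)
feats = rng.normal(size=(10, 16))              # 10 tracklet features (stand-in)
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
cams = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

intra, inter = best_associations(feats, cams, anchor=0)
```

In the full scheme these associations are re-estimated as the network improves, so matching and representation learning reinforce each other end to end.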
We evaluate our model on unsupervised person re-identification and pose-invariant face recognition.