Scalable Person Re-Identification: A Benchmark

This paper contributes a new high-quality dataset for person re-identification, named "Market-1501". Current datasets generally: 1) are limited in scale; 2) consist of hand-drawn bounding boxes, which are unavailable in realistic settings; 3) provide only one ground-truth image and one query image per identity (a closed environment). To address these problems, the proposed Market-1501 dataset features three aspects. First, it contains over 32,000 annotated bounding boxes, plus a distractor set of over 500K images, making it the largest person re-identification dataset to date. Second, images in the Market-1501 dataset are produced using the Deformable Part Model (DPM) as the pedestrian detector. Third, the dataset is collected in an open system, where each identity has multiple images under each camera. As a minor contribution, inspired by recent advances in large-scale image search, this paper proposes an unsupervised Bag-of-Words descriptor, viewing person re-identification as a special task of image search. Experiments show that the proposed descriptor yields competitive accuracy on the VIPeR, CUHK03, and Market-1501 datasets, and scales to the large 500K distractor set.
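The core of a Bag-of-Words descriptor is quantizing local image features against a codebook and pooling them into a normalized histogram, which can then be compared like a document vector in image search. The sketch below is a minimal, hypothetical simplification: the paper's full pipeline also uses spatial stripes, TF-IDF weighting, and other refinements omitted here, and the random codebook stands in for one trained offline (e.g. with k-means).

```python
import math
import random

def bow_descriptor(patch_features, codebook):
    """Hard-quantize each local feature to its nearest codeword and
    return an L2-normalized visual-word histogram (simplified sketch)."""
    hist = [0.0] * len(codebook)
    for f in patch_features:
        # Nearest codeword by squared Euclidean distance.
        nearest = min(range(len(codebook)),
                      key=lambda k: sum((a - b) ** 2 for a, b in zip(f, codebook[k])))
        hist[nearest] += 1.0
    norm = math.sqrt(sum(v * v for v in hist))
    return [v / norm for v in hist] if norm > 0 else hist

# Toy usage with random data in place of real patch features.
random.seed(0)
dim, k = 8, 20
codebook = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(k)]
query = bow_descriptor([[random.gauss(0, 1) for _ in range(dim)] for _ in range(100)], codebook)
gallery = bow_descriptor([[random.gauss(0, 1) for _ in range(dim)] for _ in range(100)], codebook)
# Re-identification ranks gallery images by similarity to the query descriptor.
score = sum(q * g for q, g in zip(query, gallery))  # cosine similarity of unit vectors
```

Because the descriptor is computed per image without labels, the method is unsupervised, and matching reduces to nearest-neighbor search over histogram vectors, which is what makes it scalable to the 500K distractor set.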


Datasets


Introduced in the Paper:

Market-1501

Used in the Paper:

CUHK03 DukeMTMC-reID VIPeR CUHK02
Task                       Dataset         Model   Metric   Metric Value   Global Rank
Person Re-Identification   DukeMTMC-reID   BOW     Rank-1   25.13          #82
Person Re-Identification   DukeMTMC-reID   BOW     mAP      12.17          #86
Person Re-Identification   Market-1501     BOW     Rank-1   34.40          #102
Person Re-Identification   Market-1501     BOW     mAP      14.09          #106

Methods


No methods listed for this paper.