Revisiting Oxford and Paris: Large-Scale Image Retrieval Benchmarking

CVPR 2018
Filip Radenović • Ahmet Iscen • Giorgos Tolias • Yannis Avrithis • Ondřej Chum

In this paper we address issues with image retrieval benchmarking on the standard and popular Oxford 5k and Paris 6k datasets. In particular, annotation errors, the size of the dataset, and the level of challenge are addressed: new annotation for both datasets is created with extra attention to the reliability of the ground truth. An extensive comparison of state-of-the-art methods is performed on the new benchmark.
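The comparison on the new benchmark follows the usual Oxford/Paris evaluation style, where retrieval quality is reported as mean Average Precision and images marked as "junk" in the ground truth are ignored in the ranked list. Below is a minimal sketch of that metric, not the authors' official evaluation code; the `rankings` and `ground_truth` structures and all identifiers are hypothetical placeholders.

```python
# Minimal sketch of mean Average Precision with junk handling, as commonly
# used for Oxford/Paris-style retrieval benchmarks (not the official code).

def average_precision(ranked_ids, positives, junk):
    """AP for one query; ranked_ids is the retrieval order, best first."""
    hits, seen, ap = 0, 0, 0.0
    for img_id in ranked_ids:
        if img_id in junk:          # junk images neither help nor hurt
            continue
        seen += 1
        if img_id in positives:
            hits += 1
            ap += hits / seen       # precision at this recall point
    return ap / max(len(positives), 1)

def mean_average_precision(rankings, ground_truth):
    """rankings: {query: ranked list}; ground_truth: {query: (positives, junk)}."""
    aps = [average_precision(rankings[q], *ground_truth[q]) for q in rankings]
    return sum(aps) / len(aps)

# Toy example with placeholder IDs (not real benchmark annotations).
if __name__ == "__main__":
    rankings = {"q1": ["a", "b", "c", "d"]}
    ground_truth = {"q1": ({"a", "c"}, {"b"})}   # (positives, junk)
    print(mean_average_precision(rankings, ground_truth))  # 1.0
```

In this setting, how a protocol labels images as positive versus junk for each query directly changes the reported score, which is why the reliability of the new annotation matters for fair comparison.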

