Image retrieval systems aim to find, within an image dataset, the images most similar to a given query image.
(Image credit: DELF)
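At its core, such a pipeline maps every image to a fixed-length embedding vector and answers queries by nearest-neighbour search. The sketch below illustrates this with random vectors standing in for learned embeddings; the dimensions and variable names are illustrative assumptions, not part of any particular system.

```python
import numpy as np

rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 128))              # stand-in embeddings for 1000 images
query = db[42] + 0.05 * rng.normal(size=128)   # noisy copy of image 42's embedding

# L2-normalize so that dot products equal cosine similarities.
db_n = db / np.linalg.norm(db, axis=1, keepdims=True)
q_n = query / np.linalg.norm(query)

scores = db_n @ q_n                  # similarity of the query to every image
top5 = np.argsort(-scores)[:5]       # indices of the 5 most similar images
print(top5)                          # image 42 should rank first
```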
GLDv2 is the largest such dataset to date by a large margin, comprising over 5M images and 200k distinct instance labels.
In this work, our key contribution is to unify global and local features into a single deep model, enabling accurate retrieval with efficient feature extraction.
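One way to realize such a unified model is to branch a global head and a local head off a shared backbone. The PyTorch sketch below shows this general layout, with GeM pooling feeding the global descriptor and a 1x1-conv attention map scoring local features; the specific layers and dimensions are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torchvision

class JointGlobalLocal(nn.Module):
    """Sketch of a shared-backbone model with a global and a local head."""
    def __init__(self, dim=128):
        super().__init__()
        resnet = torchvision.models.resnet50(weights=None)
        self.stem = nn.Sequential(*list(resnet.children())[:-3])  # up to conv4
        self.deep = resnet.layer4                                 # conv5 block
        self.whiten = nn.Linear(2048, dim)    # global descriptor projection
        self.attn = nn.Conv2d(1024, 1, 1)     # per-location attention score

    def forward(self, x):
        shallow = self.stem(x)                # (B, 1024, H, W) local features
        deep = self.deep(shallow)             # (B, 2048, H', W')
        # Global branch: GeM pooling, then projection and L2-normalization.
        p = 3.0
        g = deep.clamp(min=1e-6).pow(p).mean(dim=(2, 3)).pow(1.0 / p)
        global_desc = nn.functional.normalize(self.whiten(g), dim=1)
        # Local branch: attention scores mark salient feature-map locations.
        scores = torch.sigmoid(self.attn(shallow))   # (B, 1, H, W)
        return global_desc, shallow, scores

model = JointGlobalLocal()
g, local_feats, attn = model(torch.randn(1, 3, 224, 224))
print(g.shape, local_feats.shape, attn.shape)  # (1, 128) (1, 1024, 14, 14) (1, 1, 14, 14)
```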
We demonstrate how a landmark detector trained on our new dataset can be leveraged to index image regions, improving retrieval accuracy while being much more efficient than existing regional methods.
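The idea can be sketched as indexing one descriptor per detected region rather than one per image, so a landmark occupying only part of a database photo can still be matched. In the toy sketch below, `detect_landmarks` and `describe` are hypothetical stand-ins for a trained detector and a learned regional descriptor.

```python
import numpy as np

def detect_landmarks(image):
    """Stand-in for a trained landmark detector returning candidate boxes.
    Here it simply proposes the full image plus its two vertical halves."""
    h, w = image.shape[:2]
    return [(0, 0, w, h), (0, 0, w // 2, h), (w // 2, 0, w, h)]

def describe(region):
    """Stand-in regional descriptor (pooled CNN features in practice)."""
    v = region.mean(axis=(0, 1))          # toy: mean colour as a 3-dim vector
    return v / max(np.linalg.norm(v), 1e-12)

def index_image(image, image_id, index):
    """Store one (image_id, descriptor) entry per detected region."""
    for (x0, y0, x1, y1) in detect_landmarks(image):
        index.append((image_id, describe(image[y0:y1, x0:x1])))

index = []
rng = np.random.default_rng(0)
for i in range(3):
    index_image(rng.random((64, 64, 3)), i, index)
# A query descriptor is compared against these region-level entries,
# so matching does not require the landmark to dominate the whole image.
```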
In particular, annotation errors, dataset size, and level of difficulty are addressed: new annotations are created for both datasets, with particular attention to the reliability of the ground truth.
We propose an attentive local feature descriptor suitable for large-scale image retrieval, referred to as DELF (DEep Local Feature).
#2 best model for Image Retrieval on Oxf5k
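A core mechanism in such attentive descriptors is using an attention map to keep only the most salient local features from a dense CNN feature map. The sketch below shows that selection step in PyTorch; it illustrates the general idea rather than DELF's actual implementation, and all names are illustrative.

```python
import torch

def select_local_features(feature_map, attention, k=100):
    """Keep the k highest-attention local descriptors from a dense map."""
    B, C, H, W = feature_map.shape
    feats = feature_map.flatten(2).transpose(1, 2)   # (B, H*W, C)
    scores = attention.flatten(1)                    # (B, H*W)
    top = scores.topk(k, dim=1).indices              # strongest locations
    idx = top.unsqueeze(-1).expand(-1, -1, C)
    selected = feats.gather(1, idx)                  # (B, k, C) descriptors
    rows, cols = top // W, top % W                   # their spatial positions
    return selected, torch.stack([rows, cols], dim=-1)

fm = torch.randn(2, 1024, 30, 40)        # dense features from a CNN
att = torch.rand(2, 1, 30, 40)           # attention score per location
desc, pos = select_local_features(fm, att.squeeze(1))
print(desc.shape, pos.shape)             # (2, 100, 1024) and (2, 100, 2)
```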
The dataset was collected with three goals in mind: (i) to have both a large number of identities and also a large number of images for each identity; (ii) to cover a large range of pose, age and ethnicity; and (iii) to minimize the label noise.
#2 best model for Face Verification on IJB-C
Deep metric learning aims to learn a function mapping image pixels to embedding feature vectors that model the similarity between images.
#3 best model for Image Retrieval on CARS196
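A common instantiation of this idea is the triplet loss, which pulls an anchor towards a matching image and pushes it away from a non-matching one by at least a margin. The snippet below is a minimal PyTorch sketch, with random vectors standing in for learned embeddings.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull matching pairs together, push non-matching pairs apart
    by at least `margin` in embedding space."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage: in practice these embeddings come from a CNN over images.
a, p, n = (F.normalize(torch.randn(8, 128), dim=1) for _ in range(3))
print(triplet_loss(a, p, n).item())
```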
The zero-shot paradigm exploits vector-based word representations, extracted from text corpora with unsupervised methods, to learn general mapping functions from other feature spaces onto the word space; the words associated with the nearest neighbours of a mapped vector are then used as its linguistic labels.
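Concretely, the mapping can be as simple as a linear transform fit by least squares on seen classes, after which an unseen image is labelled by the nearest word vectors of its mapped feature. The NumPy sketch below uses random stand-ins for visual features and word vectors; the shapes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X_seen = rng.normal(size=(500, 64))   # image features of seen-class examples
W_seen = rng.normal(size=(500, 50))   # word vectors of their class labels
vocab = rng.normal(size=(1000, 50))   # word vectors for the full vocabulary

# Fit a linear mapping from feature space onto word space by least squares
# (a ridge penalty or a small MLP are common alternatives).
M, *_ = np.linalg.lstsq(X_seen, W_seen, rcond=None)

x_new = rng.normal(size=64)           # feature of an unseen-class image
w_hat = x_new @ M                     # its predicted word vector
sims = (vocab @ w_hat) / (
    np.linalg.norm(vocab, axis=1) * np.linalg.norm(w_hat) + 1e-12)
print(np.argsort(-sims)[:5])          # nearest words = predicted labels
```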
We show that both hard-positive and hard-negative examples, selected by exploiting the geometry and the camera positions available from the 3D models, enhance the performance of particular-object retrieval.
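In descriptor space, this mining amounts to choosing, for each anchor, a true match that is far away (hard positive) and non-matches that are deceptively close (hard negatives). The sketch below assumes the match set, derived from 3D-model geometry in the paper, is already given; all names are hypothetical.

```python
import numpy as np

def mine_hard_examples(desc, anchor, matches, n_neg=5):
    """Return the hardest positive and the n_neg hardest negatives for
    `anchor`, judged by cosine similarity of L2-normalized descriptors."""
    d = desc / np.linalg.norm(desc, axis=1, keepdims=True)
    sims = d @ d[anchor]
    pos = [i for i in matches if i != anchor]
    neg = [i for i in range(len(desc)) if i not in matches]
    hard_pos = min(pos, key=lambda i: sims[i])            # least similar true match
    hard_negs = sorted(neg, key=lambda i: -sims[i])[:n_neg]  # closest non-matches
    return hard_pos, hard_negs

rng = np.random.default_rng(0)
desc = rng.normal(size=(200, 128))    # stand-in image descriptors
hp, hn = mine_hard_examples(desc, anchor=0, matches={0, 3, 17, 42})
print(hp, hn)
```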