Learning Token-based Representation for Image Retrieval

12 Dec 2021 · Hui Wu, Min Wang, Wengang Zhou, Yang Hu, Houqiang Li

In image retrieval, deep local features learned in a data-driven manner have proven effective for improving retrieval performance. To realize efficient retrieval on large image databases, some approaches quantize deep local features with a large codebook and match images with an aggregated match kernel. However, these approaches incur non-trivial complexity and a large memory footprint, which limits their ability to jointly perform feature learning and aggregation. To generate compact global representations while maintaining regional matching capability, we propose a unified framework that jointly learns local feature representation and aggregation. In our framework, we first extract deep local features using CNNs. Then, we design a tokenizer module to aggregate them into a few visual tokens, each corresponding to a specific visual pattern. This helps to remove background noise and capture the more discriminative regions in the image. Next, a refinement block is introduced to enhance the visual tokens with self-attention and cross-attention. Finally, the visual tokens are concatenated to generate a compact global representation. The whole framework is trained end-to-end with image-level labels. Extensive experiments show that our approach outperforms state-of-the-art methods on the Revisited Oxford and Paris datasets.
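As a rough illustration of the pipeline described above, the sketch below tokenizes a CNN feature map with learned spatial attention, refines the tokens with self-attention and cross-attention, and concatenates them into an L2-normalized global descriptor. This is a minimal PyTorch approximation of the ideas in the abstract, not the authors' implementation; all module and parameter names (Tokenizer, RefinementBlock, num_tokens, and so on) are hypothetical.

```python
# Minimal sketch of the token-based representation pipeline.
# Assumptions: learned spatial attention as the tokenizer, standard
# multi-head attention for refinement, 4 tokens of dimension 512.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Tokenizer(nn.Module):
    """Aggregate a map of local features into a few visual tokens,
    using one learned spatial attention map per token."""

    def __init__(self, dim: int, num_tokens: int = 4):
        super().__init__()
        # 1x1 conv predicts one spatial attention map per token.
        self.attn = nn.Conv2d(dim, num_tokens, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) deep local features from a CNN backbone
        a = self.attn(feats).flatten(2).softmax(dim=-1)  # (B, T, H*W)
        f = feats.flatten(2)                             # (B, C, H*W)
        # Each token is an attention-weighted sum of local features.
        return torch.einsum("btn,bcn->btc", a, f)        # (B, T, C)


class RefinementBlock(nn.Module):
    """Enhance tokens with self-attention among tokens and
    cross-attention from tokens back to the local features."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # tokens: (B, T, C); feats: (B, C, H, W) -> context (B, H*W, C)
        ctx = feats.flatten(2).transpose(1, 2)
        tokens = tokens + self.self_attn(tokens, tokens, tokens)[0]
        tokens = tokens + self.cross_attn(tokens, ctx, ctx)[0]
        return tokens


class TokenRetrievalHead(nn.Module):
    """Tokenize, refine, then concatenate the tokens into one
    L2-normalized compact global descriptor."""

    def __init__(self, dim: int = 512, num_tokens: int = 4):
        super().__init__()
        self.tokenizer = Tokenizer(dim, num_tokens)
        self.refine = RefinementBlock(dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        tokens = self.refine(self.tokenizer(feats), feats)
        return F.normalize(tokens.flatten(1), dim=-1)    # (B, T*C)


if __name__ == "__main__":
    feats = torch.randn(2, 512, 32, 32)        # stand-in CNN feature map
    print(TokenRetrievalHead()(feats).shape)   # torch.Size([2, 2048])
```

With these assumed settings (four 512-dimensional tokens), the global descriptor is 2048-dimensional, and retrieval reduces to cosine similarity between descriptors.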


Datasets


Revisited Oxford (ROxford) · Revisited Paris (RParis)

Results from the Paper


| Task            | Dataset           | Model | Metric | Value | Global Rank |
|-----------------|-------------------|-------|--------|-------|-------------|
| Image Retrieval | ROxford (Hard)    | Token | mAP    | 66.57 | #3          |
| Image Retrieval | ROxford (Medium)  | Token | mAP    | 82.28 | #2          |
| Image Retrieval | RParis (Hard)     | Token | mAP    | 78.56 | #3          |
| Image Retrieval | RParis (Medium)   | Token | mAP    | 89.34 | #2          |

Methods


Convolutional Neural Network (local feature extraction) · Tokenizer (visual token aggregation) · Self-Attention · Cross-Attention