Exploiting Local Indexing and Deep Feature Confidence Scores for Fast Image-to-Video Search

3 Aug 2018  ·  Savas Ozkan, Gozde Bozdagi Akar ·

Cost-effective visual representation and fast query-by-example search are two challenging goals that must both be met for web-scale visual retrieval on moderate hardware. This paper introduces a fast and robust method that achieves both goals and obtains state-of-the-art performance in an image-to-video search scenario. To this end, we present critical enhancements to well-known indexing and visual representation techniques, yielding faster and more accurate retrieval at moderate hardware cost. We further improve robustness to specific visual challenges by exploiting the individual decisions of local and global descriptors at query time. In particular, local content descriptors can represent copied/duplicated scenes under large geometric deformations such as scale, orientation and affine transformations, whereas global content descriptors are more practical for near-duplicate and semantic searches. Experiments are conducted on the large-scale Stanford I2V dataset. The results show that our method remains practical in terms of complexity and query processing time for large-scale visual retrieval, even when local and global representations are used together, and it achieves state-of-the-art performance on this dataset in terms of mean average precision (MAP). Lastly, we report additional MAP scores after updating the ground-truth annotations based on the retrieval results of the proposed method, which reveals the actual performance more accurately.
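The abstract describes combining the individual decisions of local and global descriptor searches at query time. The exact fusion rule is not given here, so the following is only a minimal late-fusion sketch under assumed conventions: each modality returns per-video scores, the scores are min-max normalized per modality, and the normalized scores are summed so that a video matched well by either representation ranks highly. The function name `fuse_query_scores` and the toy video IDs are hypothetical.

```python
def fuse_query_scores(local_scores, global_scores):
    """Hypothetical late fusion of two score dictionaries (video_id -> score).

    Each modality is min-max normalized independently, then the normalized
    scores are summed; videos missing from one modality contribute 0 there.
    Returns a list of (video_id, fused_score) pairs, best match first.
    """
    def normalize(scores):
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) if hi > lo else 1.0
        return {vid: (s - lo) / span for vid, s in scores.items()}

    local_n = normalize(local_scores)
    global_n = normalize(global_scores)
    fused = {
        vid: local_n.get(vid, 0.0) + global_n.get(vid, 0.0)
        for vid in set(local_n) | set(global_n)
    }
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)


# Toy query: local (geometry-robust) and global (semantic) search results.
ranking = fuse_query_scores(
    {"vid_a": 0.9, "vid_b": 0.5, "vid_c": 0.1},
    {"vid_b": 0.8, "vid_c": 0.6, "vid_d": 0.2},
)
print(ranking[0][0])  # vid_b scores well under both modalities
```

Because each modality is normalized separately, neither descriptor type dominates simply by producing larger raw similarity values; this is one common convention for combining heterogeneous retrieval scores, not necessarily the paper's.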

