Exploiting Local Indexing and Deep Feature Confidence Scores for Fast Image-to-Video Search

Cost-effective visual representation and fast query-by-example search are two challenging goals that must be met for web-scale visual retrieval on moderate hardware. This paper introduces a fast and robust method that achieves both goals, obtaining state-of-the-art performance in an image-to-video search scenario... Specifically, we present critical enhancements to well-known indexing and visual representation techniques that yield faster and better retrieval at moderate cost. We further strengthen our method against particular visual challenges by exploiting the individual decisions of local and global descriptors at query time. For instance, local content descriptors capture copied/duplicated scenes under large geometric deformations such as scale, orientation and affine transformations, whereas global content descriptors are more practical for near-duplicate and semantic searches. Experiments are conducted on the large-scale Stanford I2V dataset. The results show that our method scales well in terms of complexity and query processing time for large-scale retrieval, even when local and global representations are used together. The proposed method achieves state-of-the-art performance on this dataset in terms of mean average precision (MAP). Lastly, we report additional MAP scores after updating the ground-truth annotations based on results unveiled by the proposed method, which reflect its actual performance more accurately.
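Since the abstract evaluates retrieval quality with mean average precision (MAP), a minimal sketch of how that metric is typically computed may help; the function and variable names below are illustrative, not taken from the paper:

```python
def average_precision(ranked, relevant):
    """AP for one query: mean of precision@k taken at each rank k
    where a relevant item appears in the ranked result list."""
    hits = 0
    precisions = []
    for k, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(results, ground_truth):
    """MAP: average of per-query AP over all queries.
    results maps query -> ranked list; ground_truth maps query -> relevant set."""
    aps = [average_precision(results[q], ground_truth[q]) for q in ground_truth]
    return sum(aps) / len(aps)
```

For example, a ranking ["a", "b", "c", "d"] with relevant set {"a", "c"} scores precision 1/1 at rank 1 and 2/3 at rank 3, giving AP = (1 + 2/3) / 2 ≈ 0.833.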
