Multi-modal image retrieval with random walk on multi-layer graphs

12 Jul 2016  ·  Renata Khasanova, Xiaowen Dong, Pascal Frossard ·

The analysis of large collections of image data is still a challenging problem due to the difficulty of capturing the true concepts in visual data. The similarity between images can be computed using different and possibly multi-modal features such as color or edge information, or even text labels. This motivates the design of image analysis solutions that are able to effectively integrate the multi-view information provided by different feature sets. We therefore propose a new image retrieval solution that is able to rank images through a random walk on a multi-layer graph, where each layer corresponds to a different type of information about the image data. We study in depth the design of the image graph and propose in particular an effective method to select the edge weights for the multi-layer graph, such that the image ranking scores are optimised. We then provide extensive experiments on different real-world photo collections, which confirm the high performance of our new image retrieval algorithm that generally surpasses state-of-the-art solutions due to a more meaningful image similarity computation.
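To illustrate the general idea of ranking images by a random walk over a multi-layer graph, the sketch below combines per-layer similarity matrices (e.g., one built from color features, one from text labels) into a single transition matrix and runs a random walk with restart from a query image. This is a minimal illustration only: the layer weights `alpha`, the restart probability, and the simple weighted combination are illustrative assumptions, not the edge-weight selection method optimised in the paper.

```python
import numpy as np

def ranking_scores(layers, alpha, query_idx, restart=0.15, n_iter=100, tol=1e-8):
    """layers: list of (n, n) non-negative similarity matrices, one per modality.
    alpha: per-layer weights (summing to 1). Returns a ranking score per image."""
    n = layers[0].shape[0]
    # Combine the layers into one adjacency matrix (illustrative fixed weights).
    W = sum(a * L for a, L in zip(alpha, layers))
    # Row-normalize to obtain a transition matrix.
    P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
    # Restart distribution concentrated on the query image.
    r = np.zeros(n)
    r[query_idx] = 1.0
    scores = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        new = (1 - restart) * P.T @ scores + restart * r
        if np.abs(new - scores).sum() < tol:
            return new
        scores = new
    return scores

# Toy usage with two random symmetric layers standing in for color and text similarities.
rng = np.random.default_rng(0)
color_sim = rng.random((5, 5)); color_sim = (color_sim + color_sim.T) / 2
text_sim = rng.random((5, 5)); text_sim = (text_sim + text_sim.T) / 2
print(ranking_scores([color_sim, text_sim], alpha=[0.6, 0.4], query_idx=0))
```

Images with higher stationary scores are considered more relevant to the query; in the paper, the inter-layer edge weights are chosen so that these ranking scores are optimised rather than fixed by hand.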
