Composing Text and Image for Image Retrieval - An Empirical Odyssey

In this paper, we study the task of image retrieval where the input query is an image plus text describing desired modifications to that image. For example, we may present an image of the Eiffel Tower and ask the system to find images that are visually similar but modified in small ways, such as being taken at night instead of during the day. To tackle this task, we learn an embedding and a composition function such that the feature of the target image is close to the feature obtained by composing the source image with the modification text; this composition defines a similarity metric between a target image and a (source image, text) query. We propose a new way to combine image and text through such a function, designed specifically for the retrieval task. We show that this approach outperforms existing methods on three datasets: Fashion-200k, MIT-States, and a new synthetic dataset we create based on CLEVR. We also show that our approach can be used to classify input queries, in addition to image retrieval.
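The abstract's idea of composing an image feature with a text feature, and training so the composed feature lands near the target image feature, can be sketched as follows. This is a minimal illustrative NumPy sketch, not the paper's implementation: the gated-residual form, the weight shapes, and the in-batch softmax loss are assumptions about one plausible realization, with randomly initialized weights standing in for learned ones.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d = 8  # feature dimension (illustrative; real features are much larger)

# Projection weights over the concatenated (image, text) feature.
# These would be learned end-to-end; here they are random placeholders.
W_gate = 0.1 * rng.standard_normal((d, 2 * d))
W_res = 0.1 * rng.standard_normal((d, 2 * d))

def compose(img_feat, txt_feat, w_g=1.0, w_r=0.1):
    """Gated residual composition (assumed form): mostly preserve the
    image feature, plus a text-conditioned residual modification."""
    joint = np.concatenate([img_feat, txt_feat])
    gate = sigmoid(W_gate @ joint)   # element-wise gate on the image feature
    residual = W_res @ joint         # text-driven modification term
    return w_g * gate * img_feat + w_r * residual

def batch_softmax_loss(composed, targets):
    """In-batch retrieval loss: each composed query should be most
    similar to its own target image among all targets in the batch."""
    sims = composed @ targets.T                        # (B, B) similarities
    sims = sims - sims.max(axis=1, keepdims=True)      # numerical stability
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                # diagonal = matches

# Usage: one query feature composed from an image and a text feature.
img = rng.standard_normal(d)
txt = rng.standard_normal(d)
query = compose(img, txt)
```

Training would minimize `batch_softmax_loss` over batches of (composed query, target image) pairs, pulling each composed feature toward its target while pushing it away from the other targets in the batch.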

Published at CVPR 2019.
Benchmark results (task: Image Retrieval with Multi-Modal Query; model: TIRG):

Dataset      Metric     Value  Rank
Fashion200k  Recall@1   14.1   #4
Fashion200k  Recall@10  42.5   #4
Fashion200k  Recall@50  63.8   #4
FashionIQ    Recall@10  3.34   #2
MIT-States   Recall@1   12.2   #2
MIT-States   Recall@5   31.9   #2
MIT-States   Recall@10  43.1   #2
