Conditioned and Composed Image Retrieval Combining and Partially Fine-Tuning CLIP-Based Features

In this paper, we present an approach for conditioned and composed image retrieval based on CLIP features. In this extension of content-based image retrieval (CBIR), a query image is combined with a text that describes the user's intent, which is relevant for application domains such as e-commerce. The proposed method starts with an initial training stage in which a simple combination of visual and textual features is used to fine-tune the CLIP text encoder. In a second training stage, a more complex combiner network that merges visual and textual features is learned. Contrastive learning is used in both stages. The proposed approach obtains state-of-the-art performance for conditioned CBIR on the FashionIQ dataset and for composed CBIR on the more recent CIRR dataset.
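The second-stage combiner and the contrastive objective described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the layer sizes, the gating mechanism, and all names (`Combiner`, `contrastive_loss`) are assumptions; the loss is a standard batch-based InfoNCE-style objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Combiner(nn.Module):
    """Merges CLIP image and text features into one query embedding.

    Hypothetical sketch of the second-stage combiner network: the hidden
    sizes and the learned scalar gate are illustrative assumptions.
    """

    def __init__(self, feature_dim: int = 640, hidden_dim: int = 1024):
        super().__init__()
        self.image_proj = nn.Sequential(nn.Linear(feature_dim, hidden_dim), nn.ReLU())
        self.text_proj = nn.Sequential(nn.Linear(feature_dim, hidden_dim), nn.ReLU())
        self.merge = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, feature_dim),
        )
        # Scalar gate mixing the learned combination with the raw features.
        self.gate = nn.Sequential(nn.Linear(2 * hidden_dim, 1), nn.Sigmoid())

    def forward(self, img_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
        h = torch.cat([self.image_proj(img_feat), self.text_proj(txt_feat)], dim=-1)
        g = self.gate(h)
        combined = self.merge(h) + g * img_feat + (1.0 - g) * txt_feat
        return F.normalize(combined, dim=-1)  # unit-norm query embedding


def contrastive_loss(query: torch.Tensor, target: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss: each combined query should match its own
    target image feature among all targets in the batch."""
    logits = query @ F.normalize(target, dim=-1).T / temperature
    labels = torch.arange(query.size(0), device=query.device)
    return F.cross_entropy(logits, labels)
```

In the first training stage, the combiner would be replaced by a simple combination (e.g., a sum of the two features) while the CLIP text encoder is fine-tuned with the same contrastive objective.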



Results from the Paper

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Text-Image Retrieval | CIRR | CLIP4Cir(v2) | (Recall@5 + Recall_subset@1)/2 | 69.09 | #1 |
| Text-Image Retrieval | Fashion IQ | CLIP4Cir(v2) | (Recall@10 + Recall@50)/2 | 50.03 | #1 |