Composed Image Retrieval using Contrastive Learning and Task-oriented CLIP-based Features

22 Aug 2023 · Alberto Baldrati, Marco Bertini, Tiberio Uricchio, Alberto del Bimbo

Given a query composed of a reference image and a relative caption, the goal of Composed Image Retrieval (CIR) is to retrieve images that are visually similar to the reference one while integrating the modifications expressed by the caption. Since recent research has demonstrated the efficacy of large-scale vision-and-language pre-trained (VLP) models on a variety of tasks, we rely on features from the OpenAI CLIP model to tackle this task. We first perform a task-oriented fine-tuning of both CLIP encoders using the element-wise sum of the visual and textual features. Then, in a second stage, we train a Combiner network that learns to fuse the image-text features, integrating the bimodal information and producing the combined features used to perform retrieval. Both training stages use contrastive learning. Starting from the bare CLIP features as a baseline, experimental results show that the task-oriented fine-tuning and the carefully crafted Combiner network are highly effective and outperform more complex state-of-the-art approaches on FashionIQ and CIRR, two popular and challenging datasets for composed image retrieval. Code and pre-trained models are available at https://github.com/ABaldrati/CLIP4Cir
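Below is a minimal PyTorch sketch of the two-stage recipe described in the abstract: a batch-wise contrastive loss, a Combiner-style fusion network, and the stage-1 element-wise-sum query. All class names, layer sizes, and the temperature value are illustrative assumptions rather than the paper's exact implementation; consult the linked repository for the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Combiner(nn.Module):
    """Sketch of a fusion network that combines CLIP image and text features
    into a single feature used for retrieval (dimensions are assumptions)."""

    def __init__(self, clip_dim: int = 640, projection_dim: int = 2560, hidden_dim: int = 5120):
        super().__init__()
        self.image_proj = nn.Sequential(nn.Linear(clip_dim, projection_dim), nn.ReLU(), nn.Dropout(0.5))
        self.text_proj = nn.Sequential(nn.Linear(clip_dim, projection_dim), nn.ReLU(), nn.Dropout(0.5))
        self.combiner = nn.Sequential(
            nn.Linear(2 * projection_dim, hidden_dim), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(hidden_dim, clip_dim),
        )
        # Scalar gate deciding how much weight to give the raw text feature
        # versus the raw image feature (a hypothetical design choice).
        self.gate = nn.Sequential(
            nn.Linear(2 * projection_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1), nn.Sigmoid(),
        )

    def forward(self, image_feats: torch.Tensor, text_feats: torch.Tensor) -> torch.Tensor:
        joint = torch.cat((self.image_proj(image_feats), self.text_proj(text_feats)), dim=-1)
        alpha = self.gate(joint)
        combined = self.combiner(joint) + alpha * text_feats + (1 - alpha) * image_feats
        return F.normalize(combined, dim=-1)


def contrastive_loss(query_feats: torch.Tensor, target_feats: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Batch-wise contrastive objective: each composed query should match its
    own target image, with the other targets in the batch as negatives."""
    logits = query_feats @ F.normalize(target_feats, dim=-1).T / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)


# Dummy CLIP features standing in for encoder outputs (batch of 32).
ref_img = F.normalize(torch.randn(32, 640), dim=-1)   # reference-image features
caption = F.normalize(torch.randn(32, 640), dim=-1)   # relative-caption features
target = torch.randn(32, 640)                          # target-image features

# Stage 1: fine-tune both CLIP encoders with the element-wise sum as query.
stage1_query = F.normalize(ref_img + caption, dim=-1)
loss_stage1 = contrastive_loss(stage1_query, target)

# Stage 2: freeze CLIP and train the Combiner on the same objective.
combiner = Combiner()
loss_stage2 = contrastive_loss(combiner(ref_img, caption), target)
```

At inference time, the combined feature would be compared against the CLIP features of all candidate images by cosine similarity to rank the retrieval results.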


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Image Retrieval | CIRR | CLIP4Cir (v3) | (Recall@5 + Recall_subset@1) / 2 | 75.10 | #6 |
| Image Retrieval | Fashion IQ | CLIP4Cir (v3) | (Recall@10 + Recall@50) / 2 | 55.36 | #7 |
