Image-to-Text Retrieval

28 papers with code • 8 benchmarks • 8 datasets

Image-text retrieval refers to the process of finding relevant images based on textual descriptions or retrieving textual descriptions that are relevant to a given image. It's an interdisciplinary area that blends techniques from computer vision, natural language processing (NLP), and machine learning. The aim is to bridge the semantic gap between the visual information present in images and the textual descriptions that humans use to interpret them.
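In practice, both retrieval directions are usually implemented the same way: encode images and texts into a shared embedding space and rank candidates by cosine similarity. A minimal sketch of the ranking step, using placeholder embeddings rather than any particular model:

```python
import numpy as np

def cosine_rank(query_emb, candidate_embs):
    """Rank candidates by cosine similarity to the query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    c = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    sims = c @ q                       # cosine similarity per candidate
    return np.argsort(-sims), sims     # best match first

# Toy shared 4-d embedding space: one text query, three image candidates.
text_emb = np.array([1.0, 0.0, 1.0, 0.0])
image_embs = np.array([
    [0.9, 0.1, 1.1, 0.0],   # nearly parallel to the query
    [0.0, 1.0, 0.0, 1.0],   # orthogonal to the query
    [1.0, 0.0, 0.0, 0.0],   # partial overlap
])
order, sims = cosine_rank(text_emb, image_embs)
best = order[0]   # index of the best-matching image
```

Text-to-image and image-to-text retrieval differ only in which side supplies the query and which supplies the candidate set.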


InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks

opengvlab/internvl 21 Dec 2023

However, the progress in vision and vision-language foundation models, which are also critical elements of multi-modal AGI, has not kept pace with LLMs.

844 ★

Negative Pre-aware for Noisy Cross-modal Matching

zhangxu0963/npc 10 Dec 2023

Since clean samples are more easily distinguished by the GMM as noise increases, the memory bank can still maintain high quality even at a high noise ratio.

7 ★
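The GMM here follows a common recipe in learning with noisy correspondences: fit a two-component Gaussian Mixture Model on per-sample losses and treat the low-loss component as the clean one. A self-contained 1-D EM sketch of that general idea (the function and toy losses are illustrative, not NPC's actual code):

```python
import numpy as np

def gmm_clean_posterior(losses, n_iter=50):
    """Fit a 2-component 1-D GMM on per-sample losses via EM and
    return P(clean | loss), where 'clean' is the low-mean component."""
    x = np.asarray(losses, dtype=float)
    mu = np.array([x.min(), x.max()])            # init means at the extremes
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
               / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, means, and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    clean = np.argmin(mu)                        # low-loss component = clean
    return resp[:, clean]

# Toy losses: five clean (low-loss) and five noisy (high-loss) samples.
losses = [0.1, 0.2, 0.15, 0.12, 0.18, 2.0, 2.2, 1.9, 2.1, 2.3]
p_clean = gmm_clean_posterior(losses)
```

Samples whose posterior exceeds a threshold (e.g. 0.5) would then be kept for the memory bank, which is why the selection stays reliable as long as the two loss modes remain separable.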

Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval

leolee99/pau NeurIPS 2023

In this paper, we propose a novel Prototype-based Aleatoric Uncertainty Quantification (PAU) framework to provide trustworthy predictions by quantifying the uncertainty arising from the inherent data ambiguity.

17 ★ · 29 Sep 2023

Vision-Language Dataset Distillation

Guang000/Awesome-Dataset-Distillation 15 Aug 2023

In this work, we design the first vision-language dataset distillation method, building on the idea of trajectory matching.

1,164 ★

PRIOR: Prototype Representation Joint Learning from Medical Images and Reports

qtacierp/prior ICCV 2023

In this paper, we present a prototype representation learning framework incorporating both global and local alignment between medical images and reports.

52 ★ · 24 Jul 2023

RS5M and GeoRSCLIP: A Large Scale Vision-Language Dataset and A Large Vision-Language Model for Remote Sensing

om-ai-lab/rs5m 20 Jun 2023

Moreover, we present an image-text paired dataset in the field of remote sensing (RS), RS5M, which has 5 million RS images with English descriptions.

155 ★

CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers

sdc17/crossget 27 May 2023

Although extensively studied for unimodal models, the acceleration for multimodal models, especially the vision-language Transformers, is relatively under-explored.

17 ★
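A common building block behind token-ensembling acceleration methods like this is merging the most similar pair of tokens by averaging, which shortens the sequence a Transformer layer must process. A generic sketch of that idea (not CrossGET's cross-guided criterion):

```python
import numpy as np

def merge_most_similar(tokens):
    """Merge the two most cosine-similar token vectors by averaging,
    shrinking the sequence length by one."""
    t = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    sim = t @ t.T
    np.fill_diagonal(sim, -np.inf)               # ignore self-similarity
    i, j = np.unravel_index(np.argmax(sim), sim.shape)
    merged = (tokens[i] + tokens[j]) / 2
    keep = [k for k in range(len(tokens)) if k not in (i, j)]
    return np.vstack([tokens[keep], merged[None, :]])

# Two nearly identical tokens and one distinct token: 3 tokens -> 2 tokens.
toks = np.array([[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]])
reduced = merge_most_similar(toks)
```

Repeating this step r times per layer trades a small amount of representational detail for a shorter sequence, which is where the speedup comes from.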

ONE-PEACE: Exploring One General Representation Model Toward Unlimited Modalities

modelscope/modelscope 18 May 2023

In this work, we explore a scalable way for building a general representation model toward unlimited modalities.

6,055 ★

Rethinking Benchmarks for Cross-modal Image-text Retrieval

cwj1412/mscoco-flikcr30k_fg 21 Apr 2023

The reason is that a large proportion of the images and texts in the benchmarks are coarse-grained.

20 ★

UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers

sdc17/upop 31 Jan 2023

Real-world data contains a vast amount of multimodal information, among which vision and language are the two most representative modalities.

83 ★