Phrase Grounding
27 papers with code • 5 benchmarks • 6 datasets
Given an image and a corresponding caption, the Phrase Grounding task aims to ground each entity mentioned by a noun phrase in the caption to a region in the image.
Source: Phrase Grounding by Soft-Label Chain Conditional Random Field
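The task definition above can be sketched as a small input/output contract: given a caption, its noun phrases, and a set of candidate regions, assign each phrase its best-matching region. All names and the scorer below are illustrative, not part of any specific benchmark or model.

```python
# Minimal sketch of the phrase grounding task's input/output structure.
# Function and variable names here are hypothetical, for illustration only.

def ground_phrases(caption, noun_phrases, candidate_boxes, score_fn):
    """Assign each noun phrase the candidate region it matches best.

    caption: full sentence (context for a real scorer; unused by this toy one).
    noun_phrases: list of entity mentions extracted from the caption.
    candidate_boxes: list of (x1, y1, x2, y2) region proposals.
    score_fn: callable (phrase, box) -> float similarity score.
    """
    return {
        phrase: max(candidate_boxes, key=lambda box: score_fn(phrase, box))
        for phrase in noun_phrases
    }

# Toy usage with a hand-made scorer that already "knows" the answer.
boxes = [(0, 0, 50, 80), (60, 10, 120, 90)]
truth = {"a man": boxes[0], "a red bike": boxes[1]}
score = lambda phrase, box: 1.0 if truth[phrase] == box else 0.0
result = ground_phrases("a man rides a red bike",
                        ["a man", "a red bike"], boxes, score)
```

In a real system, `score_fn` would be a learned phrase–region similarity model rather than a lookup.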
Libraries
Use these libraries to find Phrase Grounding models and implementations
Most implemented papers
Grounding of Textual Phrases in Images by Reconstruction
We propose a novel approach which learns grounding by reconstructing a given phrase using an attention mechanism, which can be either latent or optimized directly.
Revisiting Image-Language Networks for Open-ended Phrase Detection
Most existing work that grounds natural language phrases in images starts with the assumption that the phrase in question is relevant to the image.
MDETR -- Modulated Detection for End-to-End Multi-Modal Understanding
We also investigate the utility of our model as an object detector on a given label set when fine-tuned in a few-shot setting.
Conditional Image-Text Embedding Networks
This paper presents an approach for grounding phrases in images which jointly learns multiple text-conditioned embeddings in a single end-to-end model.
Rethinking Diversified and Discriminative Proposal Generation for Visual Grounding
Visual grounding aims to localize an object in an image referred to by a textual query phrase.
Multi-level Multimodal Common Semantic Space for Image-Phrase Grounding
Following dedicated non-linear mappings for visual features at each level, as well as word and sentence embeddings, we obtain multiple instantiations of our common semantic space, in which comparisons between any target text and the visual content are performed with cosine similarity.
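The cosine-similarity comparison in a shared embedding space can be sketched as follows; the vectors here are toy values standing in for the mapped text and visual features, not the paper's learned embeddings.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_region(text_emb, region_embs):
    """Index of the region embedding most similar to the text embedding."""
    sims = [cosine_sim(text_emb, r) for r in region_embs]
    return int(np.argmax(sims))

# Toy example: the text embedding points in the same direction as region 1,
# so cosine similarity is 1.0 there and 0.0 for the orthogonal region 0.
text = np.array([1.0, 0.0, 1.0])
regions = [np.array([0.0, 1.0, 0.0]), np.array([2.0, 0.0, 2.0])]
```

Because cosine similarity is scale-invariant, region 1 scores 1.0 despite having twice the norm of the text embedding.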
Modularized Textual Grounding for Counterfactual Resilience
Computer Vision applications often require a textual grounding module with precision, interpretability, and resilience to counterfactual inputs/queries.
Zero-Shot Grounding of Objects from Natural Language Queries
A phrase grounding system localizes a particular object in an image referred to by a natural language query.
Phrase Grounding by Soft-Label Chain Conditional Random Field
In this paper, we formulate phrase grounding as a sequence labeling task where we treat candidate regions as potential labels, and use neural chain Conditional Random Fields (CRFs) to model dependencies among regions for adjacent mentions.
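The sequence-labeling view can be sketched with a plain Viterbi decoder over candidate regions: unary scores rate each phrase–region pair, and pairwise scores couple the regions chosen for adjacent mentions. The scores below are toy values, not the paper's learned CRF potentials (which also use soft labels).

```python
import numpy as np

def viterbi(unary, pairwise):
    """Most likely region sequence for a chain of mentions.

    unary: (T, R) array, score of assigning region r to mention t.
    pairwise: (R, R) array, score of adjacent mentions taking regions (r, r').
    Returns the argmax label sequence as a list of region indices.
    """
    T, R = unary.shape
    score = unary[0].copy()
    back = np.zeros((T, R), dtype=int)
    for t in range(1, T):
        # total[i, j]: best path ending at mention t-1 in region i,
        # then assigning region j to mention t.
        total = score[:, None] + pairwise + unary[t][None, :]
        back[t] = np.argmax(total, axis=0)
        score = np.max(total, axis=0)
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy example: two mentions, three candidate regions, no pairwise preference.
unary = np.array([[2.0, 0.0, 0.0],
                  [0.0, 0.1, 1.0]])
pairwise = np.zeros((3, 3))
```

With zero pairwise scores the decoder reduces to per-mention argmax; a non-zero `pairwise` matrix is what lets the chain CRF model dependencies among regions for adjacent mentions.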
Learning Cross-modal Context Graph for Visual Grounding
To address their limitations, this paper proposes a language-guided graph representation to capture the global context of grounding entities and their relations, and develops a cross-modal graph matching strategy for the multiple-phrase visual grounding task.