Deep Visual-Semantic Alignments for Generating Image Descriptions

CVPR 2015 · Andrej Karpathy, Li Fei-Fei

We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.
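The core of the alignment model is a score between an image and a sentence computed in a shared embedding space: each word is matched to its best-scoring image region, the per-word maxima are summed, and a max-margin ranking objective pushes scores of true image-sentence pairs above mismatched ones. Below is a minimal NumPy sketch of this scoring and objective, assuming region vectors (from the CNN over R-CNN regions) and word vectors (from the bidirectional RNN) have already been projected into the common embedding; the function names are illustrative, not taken from the authors' code.

```python
import numpy as np

def alignment_score(region_embeddings, word_embeddings):
    """Image-sentence score in the shared embedding space.

    Sketch of the simplified alignment score from the paper:
    S_kl = sum_t max_i (v_i . s_t), i.e. every word is assigned to
    its best-matching region and the maxima are summed.

    region_embeddings: (num_regions, d) array of region vectors v_i
    word_embeddings:   (num_words, d) array of word vectors s_t
    """
    # (num_regions, num_words) matrix of dot products v_i . s_t
    similarities = region_embeddings @ word_embeddings.T
    # best region for every word, summed over the sentence
    return similarities.max(axis=0).sum()

def ranking_loss(scores, margin=1.0):
    """Structured max-margin objective over a batch of image-sentence pairs.

    scores: (N, N) matrix where scores[k, l] is the alignment score of
    image k with sentence l; the true pairs lie on the diagonal.
    """
    diag = np.diag(scores)
    # rank the true sentence above mismatched sentences for each image
    cost_s = np.maximum(0.0, margin + scores - diag[:, None])
    # rank the true image above mismatched images for each sentence
    cost_i = np.maximum(0.0, margin + scores - diag[None, :])
    np.fill_diagonal(cost_s, 0.0)
    np.fill_diagonal(cost_i, 0.0)
    # the paper sums the hinge costs over the batch
    return cost_s.sum() + cost_i.sum()
```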

Results

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Cross-Modal Retrieval | COCO 2014 | Dual-Path (ResNet) | Image-to-text R@1 | 41.2 | #30 |
| Cross-Modal Retrieval | COCO 2014 | Dual-Path (ResNet) | Image-to-text R@5 | 70.5 | #29 |
| Cross-Modal Retrieval | COCO 2014 | Dual-Path (ResNet) | Image-to-text R@10 | 81.1 | #28 |
| Cross-Modal Retrieval | COCO 2014 | Dual-Path (ResNet) | Text-to-image R@1 | 25.3 | #33 |
| Cross-Modal Retrieval | COCO 2014 | Dual-Path (ResNet) | Text-to-image R@5 | 53.4 | #31 |
| Cross-Modal Retrieval | COCO 2014 | Dual-Path (ResNet) | Text-to-image R@10 | 66.4 | #30 |
| Question Generation | COCO Visual Question Answering (VQA) real images 1.0 open ended | coco-Caption (Karpathy and Li, 2014) | BLEU-1 | 62.5 | #2 |
| Image Retrieval | Flickr30K 1K test | DVSA (R-CNN, AlexNet) | R@1 | 15.2 | #18 |
| Image Retrieval | Flickr30K 1K test | DVSA (R-CNN, AlexNet) | R@10 | 50.5 | #17 |
| Image Captioning | Flickr30k Captions test | BRNN | BLEU-4 | 15.7 | #3 |
| Image Captioning | Flickr30k Captions test | BRNN | CIDEr | 24.7 | #6 |
| Image Captioning | Flickr30k Captions test | BRNN | METEOR | 15.3 | #3 |
| Image Captioning | Flickr30k Captions test | BRNN | SPICE | - | #6 |
| Image-to-Text Retrieval | MS COCO | DVSA | Recall@10 | 74.8 | #7 |
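The retrieval rows above report Recall@K, the fraction of queries for which the ground-truth item appears among the top K retrieved results. A small illustrative sketch follows, assuming a square score matrix with the correct match for query i stored at index i (in practice each image has five reference captions, so the real bookkeeping differs slightly); the helper name is hypothetical.

```python
import numpy as np

def recall_at_k(score_matrix, k):
    """Recall@K for cross-modal retrieval.

    score_matrix: (num_queries, num_items) similarity scores, where the
    correct item for query i is assumed to be item i.
    Returns the fraction of queries whose ground-truth item is ranked
    within the top k.
    """
    # item indices sorted by decreasing score for every query
    ranking = np.argsort(-score_matrix, axis=1)
    ground_truth = np.arange(score_matrix.shape[0])[:, None]
    hits = (ranking[:, :k] == ground_truth).any(axis=1)
    return hits.mean()

# Example usage with alignment scores computed as in the sketch above:
# print(recall_at_k(scores, 1), recall_at_k(scores, 5), recall_at_k(scores, 10))
```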

Methods


No methods listed for this paper.