Deep Visual-Semantic Alignments for Generating Image Descriptions

We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data...
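The paper's alignment model scores an image–sentence pair by letting each word fragment pick its best-matching image region and summing those per-word maxima. A minimal sketch of that simplified score, assuming region and word embeddings have already been projected into a shared space (the embedding networks themselves are not shown):

```python
import numpy as np

def image_sentence_score(regions, words):
    """Simplified DVSA alignment score: each word embedding aligns to its
    best-matching image region via a dot product, and the per-word maxima
    are summed over the sentence.

    regions: (num_regions, d) array of region embeddings
    words:   (num_words, d)   array of word embeddings
    """
    sims = words @ regions.T        # (num_words, num_regions) dot products
    return sims.max(axis=1).sum()   # best region per word, summed

# toy example with made-up embeddings: 3 regions, 2 words, shared dim 4
rng = np.random.default_rng(0)
regions = rng.normal(size=(3, 4))
words = rng.normal(size=(2, 4))
score = image_sentence_score(regions, words)
```

In the paper this pairwise score feeds a max-margin ranking objective over corresponding and non-corresponding image–sentence pairs; the sketch above covers only the scoring step.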

PDF Abstract (CVPR 2015)

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Text-Image Retrieval | COCO | DVSA | Recall@10 | 80.5 | #1 |
| Cross-Modal Retrieval | COCO 2014 | Dual-Path (ResNet) | Image-to-text R@1 | 41.2 | #8 |
| | | | Image-to-text R@5 | 70.5 | #8 |
| | | | Image-to-text R@10 | 81.1 | #8 |
| | | | Text-to-image R@1 | 25.3 | #9 |
| | | | Text-to-image R@5 | 53.4 | #9 |
| | | | Text-to-image R@10 | 66.4 | #9 |
| Text-Image Retrieval | COCO (image as query) | DVSA | Recall@10 | 74.8 | #3 |
| Question Generation | COCO Visual Question Answering (VQA) real images 1.0 open ended | coco-Caption (Karpathy and Li, 2014) | BLEU-1 | 62.5 | #2 |
| Image Retrieval | Flickr30K 1K test | DVSA (R-CNN, AlexNet) | R@1 | 15.2 | #15 |
| | | | R@10 | 50.5 | #14 |
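The Recall@K (R@K) numbers above report the fraction of queries whose ground-truth match appears among the top K retrieved candidates. A minimal sketch of the metric on a query-by-candidate similarity matrix, assuming the correct match for query i sits at column i (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def recall_at_k(sim, k):
    """Recall@K for retrieval: sim[i, j] is the similarity between query i
    and candidate j, with the ground-truth match for query i at column i.
    Returns the fraction of queries whose match ranks in the top K."""
    # argsort of negated similarities = candidate indices in descending order
    ranked = np.argsort(-sim, axis=1)[:, :k]
    hits = (ranked == np.arange(sim.shape[0])[:, None]).any(axis=1)
    return hits.mean()

# toy 4x4 similarity matrix (made-up values); diagonal = true matches
sim = np.array([[0.9, 0.1, 0.2, 0.0],
                [0.3, 0.2, 0.8, 0.1],
                [0.0, 0.5, 0.7, 0.6],
                [0.2, 0.1, 0.0, 0.4]])
r1 = recall_at_k(sim, 1)  # 0.75: query 1's match is only ranked third
r3 = recall_at_k(sim, 3)  # 1.0: every match appears in the top 3
```

The same function covers both retrieval directions in the table: image-to-text uses sentences as candidates for each image query, and text-to-image transposes the roles.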
