A Hierarchical Approach for Generating Descriptive Image Paragraphs

Recent progress on image captioning has made it possible to generate novel sentences describing images in natural language, but compressing an image into a single sentence can describe visual content in only coarse detail. While one new captioning approach, dense captioning, can potentially describe images in finer levels of detail by captioning many regions within an image, it in turn is unable to produce a coherent story for an image. In this paper we overcome these limitations by generating entire paragraphs for describing images, which can tell detailed, unified stories. We develop a model that decomposes both images and paragraphs into their constituent parts, detecting semantic regions in images and using a hierarchical recurrent neural network to reason about language. Linguistic analysis confirms the complexity of the paragraph generation task, and thorough experiments on a new dataset of image and paragraph pairs demonstrate the effectiveness of our approach.
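The hierarchical decomposition the abstract describes — pooling detected region features into an image code, running a sentence-level RNN whose hidden states act as topic vectors, and conditioning a word-level RNN on each topic — can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the weights are random, the RNN cells are plain tanh updates rather than trained GRUs/LSTMs, and all dimensions and names (`rnn_step`, `generate_paragraph`, `W_stop`, etc.) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN, FEAT, VOCAB, MAX_SENTS, MAX_WORDS = 8, 8, 12, 3, 4

# Hypothetical toy weights (random here; a real model would learn these).
W_s = rng.normal(size=(HIDDEN, HIDDEN + FEAT)) * 0.1    # sentence-level RNN
W_w = rng.normal(size=(HIDDEN, HIDDEN + HIDDEN)) * 0.1  # word-level RNN
W_out = rng.normal(size=(VOCAB, HIDDEN)) * 0.1          # word logits
W_stop = rng.normal(size=(2, HIDDEN)) * 0.1             # continue/stop head

def rnn_step(W, h, x):
    # One simple recurrent update: h' = tanh(W [h; x]).
    return np.tanh(W @ np.concatenate([h, x]))

def generate_paragraph(region_feats):
    """Pooled region features drive a sentence RNN; each of its hidden
    states is used as a topic vector conditioning a word RNN."""
    pooled = region_feats.mean(axis=0)          # crude pooling over regions
    h_s = np.zeros(HIDDEN)
    paragraph = []
    for _ in range(MAX_SENTS):
        h_s = rnn_step(W_s, h_s, pooled)        # one sentence-RNN step
        topic = h_s                             # topic vector for this sentence
        h_w, sentence = np.zeros(HIDDEN), []
        for _ in range(MAX_WORDS):
            h_w = rnn_step(W_w, h_w, topic)     # word RNN sees the topic each step
            sentence.append(int(np.argmax(W_out @ h_w)))  # greedy word choice
        paragraph.append(sentence)
        if np.argmax(W_stop @ h_s) == 1:        # learned decision to stop
            break
    return paragraph

regions = rng.normal(size=(5, FEAT))            # e.g. 5 detected region features
para = generate_paragraph(regions)              # list of sentences (word-id lists)
```

The two-level recurrence is the key design choice: the sentence RNN decides *how many* sentences to emit and *what* each should be about, while the word RNN handles surface realization, which is what lets a paragraph stay coherent across sentences.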

Published at CVPR 2017.


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Image Paragraph Captioning | Image Paragraph Captioning | Regions-Hierarchical (ours) | BLEU-4 | 8.69 | #7 |
| Image Paragraph Captioning | Image Paragraph Captioning | Regions-Hierarchical (ours) | METEOR | 15.95 | #7 |
| Image Paragraph Captioning | Image Paragraph Captioning | Regions-Hierarchical (ours) | CIDEr | 13.52 | #10 |
