Keep it Consistent: Topic-Aware Storytelling from an Image Stream via Iterative Multi-agent Communication

Visual storytelling aims to automatically generate a narrative paragraph from a sequence of images. Existing approaches construct a text description independently for each image and roughly concatenate these descriptions into a story, which leads to semantically incoherent content. In this paper, we propose a new approach to visual storytelling that introduces a topic description task to capture the global semantic context of an image stream; the story is then generated under the guidance of this topic description. To combine the two generation tasks, we propose a multi-agent communication framework that treats the topic description generator and the story generator as two agents and trains them simultaneously through an iterative updating mechanism. We validate our approach on the VIST dataset, where quantitative results, ablations, and human evaluation show that our method generates higher-quality stories than state-of-the-art methods.
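
For intuition, below is a minimal sketch (not the authors' released code) of how an iterative two-agent update of this kind could be implemented in PyTorch: a topic-description agent and a story agent are updated alternately, each conditioned on the other's latest output. All module names, dimensions, and losses are illustrative assumptions rather than the paper's actual architecture.

```python
# Illustrative sketch of iterative two-agent updating (assumed, not the paper's code).
import torch
import torch.nn as nn

class TopicAgent(nn.Module):
    """Encodes image-stream features into a global topic representation."""
    def __init__(self, feat_dim=512, topic_dim=256):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, topic_dim, batch_first=True)

    def forward(self, image_feats):           # image_feats: (B, T, feat_dim)
        _, h = self.encoder(image_feats)      # h: (1, B, topic_dim)
        return h.squeeze(0)                   # global topic vector

class StoryAgent(nn.Module):
    """Produces story-token logits conditioned on images and the topic."""
    def __init__(self, feat_dim=512, topic_dim=256, vocab=1000):
        super().__init__()
        self.decoder = nn.GRU(feat_dim + topic_dim, 256, batch_first=True)
        self.out = nn.Linear(256, vocab)

    def forward(self, image_feats, topic):
        topic_rep = topic.unsqueeze(1).expand(-1, image_feats.size(1), -1)
        h, _ = self.decoder(torch.cat([image_feats, topic_rep], dim=-1))
        return self.out(h)                    # (B, T, vocab)

def iterative_training_step(topic_agent, story_agent, optim_t, optim_s,
                            image_feats, story_targets):
    """One round of alternating (iterative) updates between the two agents."""
    # 1) Update the story agent with the topic agent frozen.
    with torch.no_grad():
        topic = topic_agent(image_feats)
    logits = story_agent(image_feats, topic)
    loss_s = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), story_targets.reshape(-1))
    optim_s.zero_grad(); loss_s.backward(); optim_s.step()

    # 2) Update the topic agent using feedback from the story agent
    #    (only the topic agent's optimizer takes a step here).
    topic = topic_agent(image_feats)
    logits = story_agent(image_feats, topic)
    loss_t = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), story_targets.reshape(-1))
    optim_t.zero_grad(); loss_t.backward(); optim_t.step()
    return loss_s.item(), loss_t.item()
```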

PDF | Abstract (COLING 2020)

Datasets

VIST
Results from the Paper


Task: Visual Storytelling | Dataset: VIST | Model: TAVST (RL)

Metric    Value   Global Rank
BLEU-1    64.2    #7
BLEU-2    39.6    #6
BLEU-3    23.7    #6
BLEU-4    14.6    #9
METEOR    35.7    #8
CIDEr     9.2     #14
ROUGE-L   31      #2
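
As a hedged aside, BLEU scores like those reported above can be computed with NLTK's corpus-level BLEU; the exact toolkit and preprocessing used by the paper may differ, and the sentences below are made up for illustration.

```python
# Illustrative BLEU computation for a generated story (example data only).
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

references = [[["the", "family", "gathered", "for", "the", "birthday", "party"]]]
hypotheses = [["the", "family", "met", "for", "a", "birthday", "party"]]

smooth = SmoothingFunction().method1
bleu1 = corpus_bleu(references, hypotheses,
                    weights=(1, 0, 0, 0), smoothing_function=smooth)
bleu4 = corpus_bleu(references, hypotheses,
                    weights=(0.25, 0.25, 0.25, 0.25), smoothing_function=smooth)
print(f"BLEU-1: {bleu1:.3f}  BLEU-4: {bleu4:.3f}")
```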

Methods


No methods listed for this paper.