SentiStory: A Multi-Layered Sentiment-Aware Generative Model for Visual Storytelling

The visual storytelling (VIST) task aims to generate reasonable, human-like, and coherent stories from an input image stream. Although many deep learning models have achieved promising results, most of them do not directly leverage the sentiment information of stories. In this paper, we propose a sentiment-aware generative model for VIST called SentiStory. The key component of SentiStory is a multi-layered sentiment extraction module (MLSEM). For a given image stream, the higher layer of the MLSEM produces coarse-grained but accurate sentiments, while the lower layer extracts fine-grained but often unreliable ones. The two layers are combined strategically to generate coherent and rich visual sentiment concepts for the VIST task. Results from both automatic and human evaluations demonstrate that, with the help of the MLSEM, SentiStory generates more coherent and human-like stories.
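To make the layer combination concrete, below is a minimal sketch of one plausible way to merge coarse-grained and fine-grained sentiment predictions. The combination rule (keep the fine-grained concept only when its confidence clears a threshold and its polarity agrees with the coarse layer, otherwise fall back to the coarse prediction), the data structures, and all names and thresholds are hypothetical illustrations, not the paper's exact method.

```python
# Sketch of combining a coarse (accurate) and a fine (detailed but noisy) sentiment layer.
# All names, fields, and the threshold are assumptions for illustration only.

from dataclasses import dataclass
from typing import List


@dataclass
class SentimentPrediction:
    concept: str        # e.g. "joyful", "gloomy"
    polarity: int       # +1 positive, -1 negative, 0 neutral
    confidence: float   # model confidence in [0, 1]


def combine_layers(coarse: SentimentPrediction,
                   fine: SentimentPrediction,
                   fine_conf_threshold: float = 0.7) -> SentimentPrediction:
    """Prefer the fine-grained concept when it is reliable and consistent
    with the coarse-grained one; otherwise fall back to the coarse layer."""
    consistent = (fine.polarity == coarse.polarity) or coarse.polarity == 0
    if fine.confidence >= fine_conf_threshold and consistent:
        return fine
    return coarse


def sentiment_concepts_for_stream(coarse_preds: List[SentimentPrediction],
                                  fine_preds: List[SentimentPrediction]) -> List[str]:
    """Produce one visual sentiment concept per image in the stream."""
    return [combine_layers(c, f).concept
            for c, f in zip(coarse_preds, fine_preds)]


if __name__ == "__main__":
    coarse = [SentimentPrediction("positive", +1, 0.9),
              SentimentPrediction("negative", -1, 0.8)]
    fine = [SentimentPrediction("joyful", +1, 0.85),
            SentimentPrediction("nostalgic", +1, 0.4)]   # low confidence, inconsistent polarity
    print(sentiment_concepts_for_stream(coarse, fine))   # ['joyful', 'negative']
```

In this sketch the coarse layer acts as a safety net: the story generator always receives a sentiment concept per image, and the fine-grained vocabulary is used only when it is trustworthy.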


Datasets

VIST
Results from the Paper


Task: Visual Storytelling
Dataset: VIST
Model: SentiStory

Metric     Value   Global Rank
BLEU-1     65.5    #4
BLEU-2     40.7    #4
BLEU-3     24.1    #3
BLEU-4     14.8    #5
METEOR     35.7    #8
CIDEr      10.1    #9
ROUGE-L    30.2    #7

Methods


No methods listed for this paper.