Visual Storytelling via Predicting Anchor Word Embeddings in the Stories

13 Jan 2020 · Bowen Zhang, Hexiang Hu, Fei Sha

We propose a learning model for the task of visual storytelling. The main idea is to predict anchor word embeddings from the images and to use these embeddings jointly with the image features to generate narrative sentences. To train the predictor, we use the embeddings of nouns randomly sampled from the ground-truth stories as target anchor word embeddings. To narrate a sequence of images, we feed the predicted anchor word embeddings and the image features as joint input to a seq2seq model. In contrast to state-of-the-art methods, the proposed model is simple in design, easy to optimize, and attains the best results on most automatic evaluation metrics. It also outperforms competing methods in human evaluation.
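The page does not include code, so below is a minimal sketch of the described pipeline, assuming PyTorch, a precomputed 2048-d image feature per image, 300-d word vectors (e.g. GloVe), and hypothetical helpers (`story_nouns`, `word_vectors`); it is an illustration of the idea, not the authors' implementation. The anchor predictor would be trained to regress the sampled noun embedding (e.g. with an MSE or cosine loss), and the decoder with standard cross-entropy; at test time the predicted embedding replaces the sampled one.

```python
# Minimal sketch (not the authors' released code), assuming PyTorch,
# 2048-d image features, and 300-d pretrained word vectors.
import random

import torch
import torch.nn as nn


class AnchorWordPredictor(nn.Module):
    """Predicts an anchor word embedding from an image feature."""

    def __init__(self, img_dim=2048, emb_dim=300):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(img_dim, 512), nn.ReLU(), nn.Linear(512, emb_dim)
        )

    def forward(self, img_feat):           # (B, img_dim)
        return self.mlp(img_feat)          # (B, emb_dim)


class StoryDecoder(nn.Module):
    """Seq2seq decoder conditioned jointly on image feature and anchor embedding."""

    def __init__(self, vocab_size, img_dim=2048, emb_dim=300, hid_dim=512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.init_h = nn.Linear(img_dim + emb_dim, hid_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, img_feat, anchor_emb, tokens):
        # Joint conditioning: image feature and anchor embedding together
        # initialize the decoder hidden state.
        h0 = torch.tanh(self.init_h(torch.cat([img_feat, anchor_emb], dim=-1)))
        out, _ = self.rnn(self.word_emb(tokens), h0.unsqueeze(0))
        return self.out(out)               # (B, T, vocab_size) logits


def sample_anchor_target(story_nouns, word_vectors):
    """Training target: embedding of a noun sampled at random from the
    ground-truth sentence (hypothetical helpers: `story_nouns` is a list of
    nouns, e.g. from a POS tagger; `word_vectors` maps word -> tensor)."""
    noun = random.choice(story_nouns)
    return word_vectors[noun]              # (emb_dim,) regression target
```

At training time, each image's predicted anchor embedding is supervised by `sample_anchor_target`; at inference, `AnchorWordPredictor` output is passed directly to `StoryDecoder`, one sentence per image in the sequence.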


Datasets

VIST

Results from the Paper


Task: Visual Storytelling · Dataset: VIST · Model: StoryAnchor: w/ Predicted Nouns

Metric    Value   Global Rank
BLEU-1    65.1    #5
BLEU-2    40.0    #5
BLEU-3    23.4    #8
BLEU-4    14.0    #15
METEOR    35.5    #13
CIDEr     9.9     #11
ROUGE-L   30.0    #13
