Contextualize, Show and Tell: A Neural Visual Storyteller

3 Jun 2018  ·  Diana Gonzalez-Rico, Gibran Fuentes-Pineda ·

We present a neural model for generating short stories from image sequences, which extends the Show and Tell image description model of Vinyals et al. (2015). The extension relies on an encoder LSTM to compute a context vector for each story from its image sequence. This context vector is used as the initial state of multiple independent decoder LSTMs, each of which generates the portion of the story corresponding to one image in the sequence, taking that image's embedding as its first input. Our model achieved competitive results on the METEOR metric and in human ratings in the internal track of the Visual Storytelling Challenge 2018.
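The encoder–decoder scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class name, dimensions, and vocabulary size are hypothetical, and a single decoder module is reused across images where the paper uses independent decoders.

```python
import torch
import torch.nn as nn


class ContextualizeShowTell(nn.Module):
    """Sketch of the described architecture (hypothetical names/dimensions):
    an encoder LSTM reads the sequence of image embeddings to produce a story
    context vector, which initializes the decoder LSTM state; each decoder
    takes its image's embedding as the first input before the word tokens."""

    def __init__(self, embed_dim=256, hidden_dim=256, vocab_size=1000):
        super().__init__()
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # One decoder reused per image stands in for the paper's multiple
        # independent decoders; a faithful version would untie the weights.
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_embeds, captions):
        # image_embeds: (batch, n_images, embed_dim)
        # captions:     (batch, n_images, seq_len) word-token ids
        _, (h, c) = self.encoder(image_embeds)  # story context vector
        logits = []
        for i in range(image_embeds.size(1)):
            # First input is the i-th image embedding, then the words.
            inputs = torch.cat(
                [image_embeds[:, i : i + 1, :],
                 self.word_embed(captions[:, i, :])],
                dim=1,
            )
            # Context vector serves as the decoder's initial state.
            out, _ = self.decoder(inputs, (h, c))
            logits.append(self.out(out))
        # (batch, n_images, seq_len + 1, vocab_size)
        return torch.stack(logits, dim=1)
```

For example, a batch of 2 stories with 5 images each (256-d embeddings) and 7-token caption segments yields logits of shape `(2, 5, 8, 1000)`, one distribution per generated position per image.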



Results from the Paper

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Visual Storytelling | VIST | CST | BLEU-1 | 60.1 | #14 |
| Visual Storytelling | VIST | CST | BLEU-2 | 36.5 | #12 |
| Visual Storytelling | VIST | CST | BLEU-3 | 21.1 | #14 |
| Visual Storytelling | VIST | CST | BLEU-4 | 12.7 | #22 |
| Visual Storytelling | VIST | CST | METEOR | 34.4 | #25 |
| Visual Storytelling | VIST | CST | CIDEr | 5.1 | #28 |
| Visual Storytelling | VIST | CST | ROUGE-L | 29.2 | #25 |