StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation

13 Sep 2022 · Adyasha Maharana, Darryl Hannan, Mohit Bansal

Recent advances in text-to-image synthesis have led to large pretrained transformers with excellent capabilities to generate visualizations from given text. However, these models are ill-suited for specialized tasks like story visualization, which requires an agent to produce a sequence of images given a corresponding sequence of captions, forming a narrative. Moreover, we find that the story visualization task makes it difficult to assess generalization to unseen plots and characters in new narratives. Hence, we first propose the task of story continuation, where the generated visual story is conditioned on a source image, allowing for better generalization to narratives with new characters. We then enhance, or 'retro-fit', the pretrained text-to-image synthesis models with task-specific modules for (a) sequential image generation and (b) copying relevant elements from an initial frame. Next, we explore full-model finetuning, as well as prompt-based tuning for parameter-efficient adaptation, of the pretrained model. We evaluate our approach, StoryDALL-E, on two existing datasets, PororoSV and FlintstonesSV, and introduce a new dataset, DiDeMoSV, collected from a video-captioning dataset. We also develop a GAN-based model, StoryGANc, for story continuation and compare it with StoryDALL-E to demonstrate the advantages of our approach. We show that our retro-fitting approach outperforms GAN-based models for story continuation and facilitates copying of visual elements from the source image, thereby improving continuity in the generated visual story. Finally, our analysis suggests that pretrained transformers struggle to comprehend narratives containing several characters. Overall, our work demonstrates that pretrained text-to-image synthesis models can be adapted for complex and low-resource tasks like story continuation.
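The retro-fitting idea described above can be pictured with a short sketch. The following is a minimal, illustrative PyTorch example, not the authors' implementation: module and argument names such as RetroCrossAttention, RetrofittedBlock, and source_tokens are our own. It shows how a new cross-attention sub-layer over embeddings of the source frame's image tokens could be inserted into a pretrained autoregressive text-to-image transformer block, with a residual connection so the pretrained pathway stays intact. Whether the pretrained weights are frozen (parameter-efficient, prompt-style tuning) or finetuned end-to-end is a training choice the paper explores in both forms.

```python
# Minimal sketch of retro-fitting a pretrained transformer block with
# cross-attention to a source frame (illustrative, not the official code).
import torch
import torch.nn as nn

class RetroCrossAttention(nn.Module):
    """Cross-attention from the generated sequence's hidden states to source-frame tokens."""
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, hidden: torch.Tensor, source_tokens: torch.Tensor) -> torch.Tensor:
        # hidden:        (batch, seq_len, d_model) states from the pretrained block
        # source_tokens: (batch, src_len, d_model) embeddings of the initial frame
        attended, _ = self.attn(query=hidden, key=source_tokens, value=source_tokens)
        # Residual connection keeps the pretrained pathway intact.
        return self.norm(hidden + attended)

class RetrofittedBlock(nn.Module):
    """Wraps a pretrained block and adds the new, task-specific cross-attention."""
    def __init__(self, pretrained_block: nn.Module, d_model: int, n_heads: int,
                 freeze_pretrained: bool = True):
        super().__init__()
        self.block = pretrained_block
        if freeze_pretrained:
            # Parameter-efficient setting: train only the new module.
            for p in self.block.parameters():
                p.requires_grad = False
        self.cross_attn = RetroCrossAttention(d_model, n_heads)

    def forward(self, hidden: torch.Tensor, source_tokens: torch.Tensor) -> torch.Tensor:
        hidden = self.block(hidden)              # original self-attention + FFN
        return self.cross_attn(hidden, source_tokens)
```

The key design point the sketch illustrates is additive adaptation: the new cross-attention path lets generated frames copy visual elements from the source image, while the pretrained generation pathway is left unchanged.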

Results: Story Continuation (global rank in parentheses; FID lower is better, Char-F1 and F-Acc higher is better)

FlintstonesSV

| Model | FID ↓ | Char-F1 ↑ | F-Acc ↑ |
| --- | --- | --- | --- |
| StoryDALL-E | 28.37 (#2) | 74.28 (#1) | 52.35 (#3) |
| StoryDALL-E (Story Embeddings + Cross-Attention) | 36.28 (#5) | 72.44 (#3) | 51.32 (#4) |
| StoryDALL-E (Story Embeddings) | 29.21 (#3) | 72.18 (#4) | 53.28 (#1) |
| StoryDALL-E (Cross-Attention) | 35.04 (#4) | 73.94 (#2) | 52.72 (#2) |

PororoSV

| Model | FID ↓ | Char-F1 ↑ | F-Acc ↑ |
| --- | --- | --- | --- |
| StoryDALL-E | 21.64 (#2) | 40.28 (#1) | 20.94 (#2) |
| StoryDALL-E (Story Embeddings + Cross-Attention) | 31.68 (#5) | 35.29 (#4) | 16.73 (#4) |
| StoryDALL-E (Cross-Attention) | 23.27 (#3) | 40.25 (#2) | 18.16 (#3) |
| StoryDALL-E (Story Embeddings) | 30.45 (#4) | 39.32 (#3) | 34.65 (#1) |
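For reference, the character-grounding metrics above (Char-F1 and F-Acc) are commonly computed in story visualization work by running a pretrained character classifier over the generated frames and comparing its multi-label predictions with the ground-truth character annotations, while FID compares feature statistics of real and generated frames. The sketch below shows one plausible way to compute Char-F1 and F-Acc from such predictions; the function names and array layout are our assumptions, not the official evaluation code.

```python
# Hedged sketch of Char-F1 / F-Acc computation from multi-label character
# predictions (illustrative; not the official evaluation script).
import numpy as np
from sklearn.metrics import f1_score

def char_f1(pred_labels: np.ndarray, true_labels: np.ndarray) -> float:
    # pred_labels, true_labels: (num_frames, num_characters) binary matrices.
    # Micro-averaged F1 over all character slots, reported as a percentage.
    return f1_score(true_labels, pred_labels, average="micro") * 100

def frame_accuracy(pred_labels: np.ndarray, true_labels: np.ndarray) -> float:
    # F-Acc: fraction of frames whose full character set is predicted exactly.
    exact = np.all(pred_labels == true_labels, axis=1)
    return float(exact.mean()) * 100
```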
