Transitional Adaptation of Pretrained Models for Visual Storytelling

Previous models for vision-to-language generation tasks usually pretrain a visual encoder and a language generator in their respective domains and jointly finetune them on the target task. However, this direct transfer practice may suffer from a discord between visual specificity and language fluency, since the two modules are often trained separately on large corpora of visual and text data with no common ground. In this work, we claim that a transitional adaptation task is required between pretraining and finetuning to harmonize the visual encoder and the language model for challenging downstream target tasks like visual storytelling. We propose a novel approach named Transitional Adaptation of Pretrained Model (TAPM) that adapts the multi-modal modules to each other through a simpler alignment task over the visual inputs only, without requiring text labels. Through extensive experiments, we show that the adaptation step significantly improves the performance of multiple language models on sequential video and image captioning tasks. We achieve new state-of-the-art performance on both language metrics and human evaluation in the multi-sentence description task of LSMDC 2019 and the image storytelling task of VIST. Our experiments reveal that this improvement in caption quality does not depend on the specific choice of language model.
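The abstract specifies the training schedule (pretrain, transitional adaptation on visual inputs without text labels, then finetune) but not the exact adaptation objective. The sketch below is a minimal PyTorch illustration of that schedule under stated assumptions: `VisualEncoder`, `LanguageGenerator`, the InfoNCE-style visual alignment loss, and the `adapt_loader` / `story_loader` loaders are hypothetical stand-ins, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VisualEncoder(nn.Module):
    """Stand-in for a pretrained visual backbone over pre-extracted clip/image features."""
    def __init__(self, feat_dim=2048, hidden=768):
        super().__init__()
        self.proj = nn.Linear(feat_dim, hidden)

    def forward(self, feats):               # feats: (B, T, feat_dim)
        return self.proj(feats)              # (B, T, hidden)


class LanguageGenerator(nn.Module):
    """Stand-in for a pretrained language model conditioned on visual context."""
    def __init__(self, vocab=30522, hidden=768):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.lm_head = nn.Linear(hidden, vocab)

    def forward(self, vis_ctx, tokens):      # vis_ctx: (B, T, H), tokens: (B, L)
        h0 = vis_ctx.mean(dim=1).unsqueeze(0).contiguous()   # (1, B, H) initial state
        out, _ = self.decoder(self.embed(tokens), h0)
        return self.lm_head(out)              # (B, L, vocab)


def visual_alignment_loss(encoder_out, generator):
    """Illustrative adaptation objective (an assumption, not the paper's loss):
    run the visual sequence through the language model's decoder with no text,
    then match each story's LM-side summary to its own encoder-side summary
    against other stories in the batch (InfoNCE)."""
    hidden, _ = generator.decoder(encoder_out)             # (B, T, H), no tokens used
    a = F.normalize(hidden[:, -1], dim=-1)                 # LM-side summary
    b = F.normalize(encoder_out.mean(dim=1), dim=-1)       # encoder-side summary
    logits = a @ b.t() / 0.07                              # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)     # i-th pairs with i-th
    return F.cross_entropy(logits, targets)


def adapt_then_finetune(encoder, generator, adapt_loader, story_loader, epochs=1):
    params = list(encoder.parameters()) + list(generator.parameters())
    opt = torch.optim.AdamW(params, lr=1e-4)

    # Phase 1 -- transitional adaptation: only visual features, no captions needed.
    for _ in range(epochs):
        for feats in adapt_loader:                          # feats: (B, T, feat_dim)
            loss = visual_alignment_loss(encoder(feats), generator)
            opt.zero_grad()
            loss.backward()
            opt.step()

    # Phase 2 -- finetuning on the target storytelling data with teacher forcing.
    for _ in range(epochs):
        for feats, tokens in story_loader:                  # tokens: (B, L) caption ids
            logits = generator(encoder(feats), tokens[:, :-1])
            loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                   tokens[:, 1:].reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()
```

The property this sketch tries to capture is the one the abstract attributes to TAPM: both modules are updated against each other using visual inputs alone before any caption supervision is applied, so the finetuning phase starts from an already harmonized encoder-decoder pair.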


Datasets

VIST, LSMDC

Results from the Paper


 Ranked #1 on Visual Storytelling on VIST (ROUGE-L metric, using extra training data)

Task                 Dataset  Model          Metric   Metric Value  Global Rank
Visual Storytelling  VIST     TAPM (no V&L)  METEOR   34.1          #26
Visual Storytelling  VIST     TAPM (no V&L)  CIDEr    8.3           #22
Visual Storytelling  VIST     TAPM (no V&L)  ROUGE-L  30.2          #7
Visual Storytelling  VIST     TAPM           METEOR   37.2          #2
Visual Storytelling  VIST     TAPM           CIDEr    13.8          #2
Visual Storytelling  VIST     TAPM           ROUGE-L  33.1          #1

Methods


No methods listed for this paper.