VLAB: Enhancing Video Language Pre-training by Feature Adapting and Blending

22 May 2023 · Xingjian He, Sihan Chen, Fan Ma, Zhicheng Huang, Xiaojie Jin, Zikang Liu, Dongmei Fu, Yi Yang, Jing Liu, Jiashi Feng

Large-scale image-text contrastive pre-training models such as CLIP have been shown to learn high-quality multimodal representations. However, there is little research on learning video-text representations for general video multimodal tasks on top of these powerful features. Towards this goal, we propose a novel video-text pre-training method, dubbed VLAB (Video Language pre-training by feature Adapting and Blending), which transfers CLIP representations to video pre-training tasks and develops unified video multimodal models for a wide range of video-text tasks. Specifically, VLAB is founded on two key strategies: feature adapting and feature blending. In the former, we introduce a new video adapter module to address CLIP's deficiency in modeling temporal information and to extend the model's capability to both contrastive and generative tasks. In the latter, we propose an end-to-end training method that further improves performance by exploiting the complementarity of image and video features. We validate the effectiveness and versatility of VLAB through extensive experiments on highly competitive video multimodal tasks, including video-text retrieval, video captioning, and video question answering. Remarkably, VLAB outperforms competing methods by a significant margin and sets new records in video question answering on the MSRVTT, MSVD, and TGIF datasets, achieving accuracies of 49.6, 61.0, and 79.0, respectively. Code and models will be released.
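The listing carries no code, so below is a minimal, hypothetical PyTorch sketch of the two strategies the abstract names: a temporal adapter laid over frozen per-frame CLIP features (feature adapting) and a learned gate that fuses image-level and video-level representations (feature blending). The module names, the residual temporal attention, the gated fusion, and the mean pooling are all illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class TemporalAdapter(nn.Module):
    """Sketch of feature adapting: self-attention across frames, added
    residually to per-frame CLIP features so a frozen image backbone
    gains temporal context. Hypothetical, not VLAB's exact module."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_frames, dim) per-frame CLIP embeddings
        h = self.norm(x)
        h, _ = self.attn(h, h, h)       # attend across the time axis
        return x + h                    # residual keeps pretrained features intact

class FeatureBlend(nn.Module):
    """Sketch of feature blending: a learned sigmoid gate mixes an
    image-style pooled feature with a temporally adapted video feature."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, image_feat: torch.Tensor, video_feat: torch.Tensor) -> torch.Tensor:
        # image_feat, video_feat: (batch, dim)
        g = self.gate(torch.cat([image_feat, video_feat], dim=-1))
        return g * video_feat + (1 - g) * image_feat

# Toy usage with random stand-ins for CLIP frame embeddings; a real
# pipeline would run a frozen CLIP visual encoder frame by frame.
B, T, D = 2, 8, 512
frames = torch.randn(B, T, D)           # per-frame "CLIP" features
adapter, blend = TemporalAdapter(D), FeatureBlend(D)

video_tokens = adapter(frames)          # temporally contextualized frames
image_feat = frames.mean(dim=1)         # image-level (frame-averaged) feature
video_feat = video_tokens.mean(dim=1)   # video-level feature
fused = blend(image_feat, video_feat)   # (B, D) blended representation
print(fused.shape)                      # torch.Size([2, 512])
```

The residual form lets the adapter start near an identity mapping, preserving the pretrained CLIP representation while temporal modeling is learned on top; how VLAB actually injects its adapter and blends features is detailed in the paper.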


Results from the Paper


 Ranked #1 on Visual Question Answering (VQA) on MSVD-QA (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Video Retrieval | DiDeMo | VLAB | text-to-video R@1 | 56.8 | #10 |
| | | | text-to-video R@5 | 81.6 | #9 |
| | | | text-to-video R@10 | 88.7 | #9 |
| Video Retrieval | MSR-VTT | VLAB | text-to-video R@1 | 55.1 | #8 |
| | | | text-to-video R@5 | 78.8 | #5 |
| | | | text-to-video R@10 | 87.6 | #3 |
| Video Captioning | MSR-VTT | VLAB | CIDEr | 74.9 | #4 |
| | | | METEOR | 33.4 | #3 |
| | | | ROUGE-L | 68.3 | #2 |
| | | | BLEU-4 | 54.6 | #4 |
| Visual Question Answering (VQA) | MSRVTT-QA | VLAB | Accuracy | 0.496 | #1 |
| Video Captioning | MSVD | VLAB | CIDEr | 179.8 | #2 |
| | | | BLEU-4 | 79.3 | #2 |
| | | | METEOR | 51.2 | #1 |
| | | | ROUGE-L | 87.9 | #1 |
| Video Retrieval | MSVD | VLAB | text-to-video R@1 | 57.5 | #6 |
| | | | text-to-video R@5 | 83.6 | #4 |
| | | | text-to-video R@10 | 89.9 | #3 |
| Visual Question Answering (VQA) | MSVD-QA | VLAB | Accuracy | 0.61 | #1 |
| Visual Question Answering (VQA) | TGIF-QA | VLAB | Accuracy (TGIF-Frame) | 79.0 | #3 |

Methods


Adapter • CLIP