SimVLM: Simple Visual Language Model Pretraining with Weak Supervision

With recent progress in joint modeling of visual and textual representations, Vision-Language Pretraining (VLP) has achieved impressive performance on many multimodal downstream tasks. However, the requirement for expensive annotations including clean image captions and regional labels limits the scalability of existing approaches, and complicates the pretraining procedure with the introduction of multiple dataset-specific objectives. In this work, we relax these constraints and present a minimalist pretraining framework, named Simple Visual Language Model (SimVLM). Unlike prior work, SimVLM reduces the training complexity by exploiting large-scale weak supervision, and is trained end-to-end with a single prefix language modeling objective. Without utilizing extra data or task-specific customization, the resulting model significantly outperforms previous pretraining methods and achieves new state-of-the-art results on a wide range of discriminative and generative vision-language benchmarks, including VQA (+3.74% vqa-score), NLVR2 (+1.17% accuracy), SNLI-VE (+1.37% accuracy) and image captioning tasks (+10.1% average CIDEr score). Furthermore, we demonstrate that SimVLM acquires strong generalization and transfer ability, enabling zero-shot behavior including open-ended visual question answering and cross-modality transfer.

Published at ICLR 2022.
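For context on the single pretraining objective mentioned in the abstract, Prefix Language Modeling (PrefixLM) treats the image patches plus a leading portion of the paired text as a bidirectionally attended prefix, and trains the model to generate the remaining text autoregressively. Below is a minimal sketch of that idea under stated assumptions: it uses a generic PyTorch encoder-decoder Transformer, and the class name `PrefixLMSketch`, the conv patch embedding, and all hyperparameters are illustrative placeholders rather than the authors' implementation, which pairs a convolutional image stage with a much larger Transformer trained on roughly 1.8B weakly labeled image-text pairs (per the pretraining column in the results table below).

```python
# Hypothetical sketch of a SimVLM-style PrefixLM objective (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrefixLMSketch(nn.Module):
    def __init__(self, vocab_size=32000, dim=512, patch=16):
        super().__init__()
        # Simplification: a single conv patch embedding stands in for the paper's
        # convolutional image stage; text tokens share one embedding table.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.text_embed = nn.Embedding(vocab_size, dim)
        self.transformer = nn.Transformer(d_model=dim, nhead=8,
                                          num_encoder_layers=6,
                                          num_decoder_layers=6,
                                          batch_first=True)
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, images, text_ids, prefix_len):
        # Prefix = image patch tokens + first `prefix_len` text tokens (prefix_len >= 1),
        # attended bidirectionally by the encoder.
        img_tok = self.patch_embed(images).flatten(2).transpose(1, 2)  # (B, P, dim)
        prefix = torch.cat([img_tok, self.text_embed(text_ids[:, :prefix_len])], dim=1)
        # The decoder generates the remaining text autoregressively; the last prefix
        # token starts the suffix, so the targets are exactly the suffix tokens.
        dec_in = self.text_embed(text_ids[:, prefix_len - 1:-1])
        targets = text_ids[:, prefix_len:]
        causal = self.transformer.generate_square_subsequent_mask(
            dec_in.size(1)).to(images.device)
        hidden = self.transformer(prefix, dec_in, tgt_mask=causal)
        logits = self.lm_head(hidden)
        # A single cross-entropy LM loss on the suffix is the entire pretraining objective.
        return F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
```

In pretraining, `prefix_len` would be sampled randomly per example so the one objective covers both image-conditioned generation (short text prefix) and text completion; at inference, feeding the image plus a textual prompt as the prefix is what enables the zero-shot captioning and open-ended VQA behavior described in the abstract.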

Results from the Paper


Task | Dataset | Model | Metric Name | Metric Value | Global Rank
Image Captioning | COCO Captions | SimVLM | BLEU-4 | 40.6 | # 17
Image Captioning | COCO Captions | SimVLM | METEOR | 33.4 | # 2
Image Captioning | COCO Captions | SimVLM | CIDEr | 143.3 | # 13
Image Captioning | COCO Captions | SimVLM | SPICE | 25.4 | # 7
Visual Reasoning | NLVR2 Dev | SimVLM | Accuracy | 84.53 | # 8
Visual Reasoning | NLVR2 Test | SimVLM | Accuracy | 85.15 | # 7
Image Captioning | nocaps entire | Single Model | CIDEr | 110.31 | # 6
Image Captioning | nocaps entire | Single Model | BLEU-1 | 83.78 | # 6
Image Captioning | nocaps entire | Single Model | BLEU-2 | 68.86 | # 5
Image Captioning | nocaps entire | Single Model | BLEU-3 | 51.06 | # 5
Image Captioning | nocaps entire | Single Model | BLEU-4 | 32.2 | # 5
Image Captioning | nocaps entire | Single Model | ROUGE-L | 59.86 | # 5
Image Captioning | nocaps entire | Single Model | METEOR | 30.55 | # 6
Image Captioning | nocaps entire | Single Model | SPICE | 14.49 | # 9
Image Captioning | nocaps in-domain | Single Model | CIDEr | 108.98 | # 7
Image Captioning | nocaps in-domain | Single Model | BLEU-1 | 84.64 | # 6
Image Captioning | nocaps in-domain | Single Model | BLEU-2 | 70.0 | # 6
Image Captioning | nocaps in-domain | Single Model | BLEU-3 | 52.96 | # 6
Image Captioning | nocaps in-domain | Single Model | BLEU-4 | 34.66 | # 7
Image Captioning | nocaps in-domain | Single Model | ROUGE-L | 61.01 | # 6
Image Captioning | nocaps in-domain | Single Model | METEOR | 31.97 | # 6
Image Captioning | nocaps in-domain | Single Model | SPICE | 14.6 | # 11
Image Captioning | nocaps near-domain | Single Model | CIDEr | 110.76 | # 6
Image Captioning | nocaps near-domain | Single Model | BLEU-1 | 84.36 | # 7
Image Captioning | nocaps near-domain | Single Model | BLEU-2 | 69.83 | # 6
Image Captioning | nocaps near-domain | Single Model | BLEU-3 | 52.42 | # 6
Image Captioning | nocaps near-domain | Single Model | BLEU-4 | 33.74 | # 6
Image Captioning | nocaps near-domain | Single Model | ROUGE-L | 60.46 | # 6
Image Captioning | nocaps near-domain | Single Model | METEOR | 30.97 | # 7
Image Captioning | nocaps near-domain | Single Model | SPICE | 14.61 | # 11
Image Captioning | nocaps out-of-domain | Single Model | CIDEr | 109.49 | # 6
Image Captioning | nocaps out-of-domain | Single Model | BLEU-1 | 80.89 | # 7
Image Captioning | nocaps out-of-domain | Single Model | BLEU-2 | 64.21 | # 7
Image Captioning | nocaps out-of-domain | Single Model | BLEU-3 | 44.38 | # 7
Image Captioning | nocaps out-of-domain | Single Model | BLEU-4 | 24.47 | # 8
Image Captioning | nocaps out-of-domain | Single Model | ROUGE-L | 56.69 | # 7
Image Captioning | nocaps out-of-domain | Single Model | METEOR | 27.91 | # 7
Image Captioning | nocaps out-of-domain | Single Model | SPICE | 13.89 | # 7
Image Captioning | nocaps-val-in-domain | SimVLM | CIDEr | 113.7 | # 6
Image Captioning | nocaps-val-in-domain | SimVLM | SPICE | - | # 11
Image Captioning | nocaps-val-in-domain | SimVLM | Pretrain (#images) | 1.8B | # 1
Image Captioning | nocaps-val-near-domain | SimVLM | CIDEr | 110.9 | # 6
Image Captioning | nocaps-val-near-domain | SimVLM | SPICE | - | # 10
Image Captioning | nocaps-val-near-domain | SimVLM | Pretrain (#images) | 1.8B | # 1
Image Captioning | nocaps-val-out-domain | SimVLM | CIDEr | 115.2 | # 5
Image Captioning | nocaps-val-out-domain | SimVLM | SPICE | - | # 10
Image Captioning | nocaps-val-out-domain | SimVLM | Pretrain (#images) | 1.8B | # 1
Image Captioning | nocaps-val-overall | SimVLM | CIDEr | 112.2 | # 6
Image Captioning | nocaps-val-overall | SimVLM | SPICE | - | # 11
Image Captioning | nocaps-val-overall | SimVLM | Pretrain (#images) | 1.8B | # 1
Visual Entailment | SNLI-VE test | SimVLM | Accuracy | 86.32 | # 4
Visual Entailment | SNLI-VE val | SimVLM | Accuracy | 86.21 | # 4
Visual Question Answering (VQA) | VQA v2 test-dev | SimVLM | Accuracy | 80.03 | # 14
Visual Question Answering (VQA) | VQA v2 test-std | SimVLM | overall | 80.34 | # 7
