Scaling Up Vision-Language Pre-training for Image Captioning

In recent years, we have witnessed a significant performance boost in the image captioning task based on vision-language pre-training (VLP). Scale is believed to be an important factor for this advance. However, most existing work only focuses on pre-training transformers of moderate size (e.g., 12 or 24 layers) on roughly 4 million images. In this paper, we present LEMON, a LargE-scale iMage captiONer, and provide the first empirical study on the scaling behavior of VLP for image captioning. We use the state-of-the-art VinVL model as our reference model, which consists of an image feature extractor and a transformer model, and scale the transformer both up and down, with model sizes ranging from 13 to 675 million parameters. In terms of data, we conduct experiments with up to 200 million image-text pairs automatically collected from the web based on the alt attribute of images (dubbed ALT200M). Extensive analysis helps to characterize the performance trend as the model size and the pre-training data size increase. We also compare different training recipes, especially for training on large-scale noisy data. As a result, LEMON achieves new state-of-the-art results on several major image captioning benchmarks, including COCO Caption, nocaps, and Conceptual Captions. We also show that LEMON can generate captions with long-tail visual concepts when used in a zero-shot manner.
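The quoted 13M-675M parameter range can be sanity-checked with a standard back-of-envelope formula for BERT-style transformer encoders (roughly 12·h² weights per layer plus the token embedding table). Note the layer/hidden-size configurations below are illustrative assumptions for this sketch, not the paper's exact settings:

```python
def transformer_params(num_layers: int, hidden_size: int, vocab_size: int = 30522) -> int:
    """Rough parameter count for a BERT-style encoder:
    token embeddings + ~12*h^2 weights per layer
    (4*h^2 for the attention projections, 8*h^2 for the FFN),
    ignoring biases, layer norms, and position embeddings."""
    embedding = vocab_size * hidden_size
    per_layer = 12 * hidden_size * hidden_size
    return embedding + num_layers * per_layer

# Hypothetical configs spanning roughly the 13M-675M range discussed above
for name, layers, hidden in [("tiny", 6, 256), ("base", 12, 768),
                             ("large", 24, 1024), ("huge", 32, 1280)]:
    print(f"{name}: ~{transformer_params(layers, hidden) / 1e6:.0f}M params")
```

Running this gives roughly 13M, 108M, 340M, and 668M parameters, which matches the order of magnitude of the model sizes reported in the abstract.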

CVPR 2022
No code implementations yet.
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Image Captioning | COCO Captions | LEMON | BLEU-4 | 42.6 | #2 |
| | | | METEOR | 31.4 | #3 |
| | | | CIDEr | 145.5 | #2 |
| | | | SPICE | 25.5 | #2 |
| Image Captioning | nocaps-val-in-domain | LEMON_base | CIDEr | 107.7 | #5 |
| | | | SPICE | 14.7 | #4 |
| | | | Pre-train (#images) | 200M | #6 |
| Image Captioning | nocaps-val-in-domain | LEMON_large | CIDEr | 116.9 | #1 |
| | | | SPICE | 15.8 | #1 |
| | | | Pre-train (#images) | 200M | #6 |
| Image Captioning | nocaps-val-near-domain | LEMON_large | CIDEr | 113.3 | #1 |
| | | | SPICE | 15.1 | #1 |
| | | | Pre-train (#images) | 200M | #5 |
| Image Captioning | nocaps-val-out-domain | LEMON_large | CIDEr | 111.3 | #4 |
| | | | SPICE | 14.0 | #3 |
| | | | Pre-train (#images) | 200M | #5 |
| Image Captioning | nocaps-val-overall | LEMON_large | CIDEr | 113.4 | #1 |
| | | | SPICE | 15.0 | #1 |
| | | | Pre-train (#images) | 200M | #5 |
| Image Captioning | nocaps-XD entire | Microsoft Cognitive Services team | CIDEr | 114.25 | #2 |
| | | | B1 | 85.62 | #2 |
| | | | B2 | 71.36 | #2 |
| | | | B3 | 53.62 | #2 |
| | | | B4 | 34.65 | #2 |
| | | | ROUGE-L | 61.2 | #2 |
| | | | METEOR | 31.27 | #2 |
| | | | SPICE | 14.85 | #2 |
