mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections

Large-scale pre-trained foundation models have become an emerging paradigm for building artificial intelligence (AI) systems that can be quickly adapted to a wide range of downstream tasks. This paper presents mPLUG, a new vision-language foundation model for both cross-modal understanding and generation. Most existing pre-trained models suffer from low computational efficiency and information asymmetry caused by the long visual sequences involved in cross-modal alignment. To address these problems, mPLUG introduces an effective and efficient vision-language architecture with novel cross-modal skip-connections, which create inter-layer shortcuts that skip a certain number of layers of time-consuming full self-attention on the vision side. mPLUG is pre-trained end-to-end on large-scale image-text pairs with both discriminative and generative objectives. It achieves state-of-the-art results on a wide range of vision-language downstream tasks, such as image captioning, image-text retrieval, visual grounding, and visual question answering. mPLUG also demonstrates strong zero-shot transferability when directly transferred to multiple video-language tasks.

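The cross-modal skip-connection described in the abstract can be pictured as follows: a stack of lightweight co-attention layers updates only the text representation (with the visual tokens serving as keys/values), while the long visual sequence bypasses these layers and rejoins the fusion only in a final full self-attention layer over the concatenated sequence. The code below is a minimal illustrative sketch of that idea, not the authors' released implementation; all class and parameter names (SkipConnectedFusionBlock, AsymmetricCoAttnLayer, ConnectedAttnLayer, num_coattn_layers) are assumptions made for exposition.

```python
# Hedged sketch of a skip-connected fusion block, assuming a pre-norm
# Transformer layout. Names and layer counts are illustrative only.
import torch
import torch.nn as nn


class AsymmetricCoAttnLayer(nn.Module):
    """Text-side self-attention + cross-attention to (unchanged) visual tokens."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1, self.norm2, self.norm3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, text: torch.Tensor, vision: torch.Tensor) -> torch.Tensor:
        h = self.norm1(text)
        text = text + self.self_attn(h, h, h)[0]
        # Vision tokens act only as keys/values; no full self-attention over them here.
        text = text + self.cross_attn(self.norm2(text), vision, vision)[0]
        return text + self.ffn(self.norm3(text))


class ConnectedAttnLayer(nn.Module):
    """Full self-attention over the concatenated [vision; text] sequence."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, vision: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        x = torch.cat([vision, text], dim=1)
        h = self.norm1(x)
        x = x + self.attn(h, h, h)[0]
        return x + self.ffn(self.norm2(x))


class SkipConnectedFusionBlock(nn.Module):
    """S cheap co-attention layers, then one connected full-attention layer.

    The long visual sequence skips the S co-attention layers and is re-fused
    only in the final layer, saving full self-attention compute on the vision side.
    """

    def __init__(self, dim: int, num_coattn_layers: int = 3):
        super().__init__()
        self.coattn = nn.ModuleList(AsymmetricCoAttnLayer(dim) for _ in range(num_coattn_layers))
        self.connected = ConnectedAttnLayer(dim)

    def forward(self, vision: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        for layer in self.coattn:
            text = layer(text, vision)       # vision features pass through untouched
        return self.connected(vision, text)  # shortcut: vision rejoins only here


# Tiny usage example with dummy tensors.
if __name__ == "__main__":
    block = SkipConnectedFusionBlock(dim=256)
    vision_tokens = torch.randn(2, 196, 256)  # long visual patch sequence
    text_tokens = torch.randn(2, 32, 256)     # short text sequence
    fused = block(vision_tokens, text_tokens)
    print(fused.shape)                        # torch.Size([2, 228, 256])
```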

Results from the Paper


Task                             Dataset           Model          Metric     Value   Global Rank
Image Captioning                 COCO Captions     mPLUG          BLEU-4      46.5   #1
Image Captioning                 COCO Captions     mPLUG          METEOR      32.0   #5
Image Captioning                 COCO Captions     mPLUG          CIDEr      155.1   #1
Image Captioning                 COCO Captions     mPLUG          SPICE       26.0   #4
Visual Question Answering (VQA)  VQA v2 test-dev   mPLUG (Huge)   Accuracy   82.43   #5
Visual Question Answering (VQA)  VQA v2 test-std   mPLUG-Huge     overall    83.62   #2
Visual Question Answering (VQA)  VQA v2 test-std   mPLUG-Huge     yes/no     94.83   #2
Visual Question Answering (VQA)  VQA v2 test-std   mPLUG-Huge     number     69.82   #2
Visual Question Answering (VQA)  VQA v2 test-std   mPLUG-Huge     other      77.02   #1
