Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks

Large-scale pre-training methods that learn cross-modal representations on image-text pairs are becoming popular for vision-language tasks. Existing methods simply concatenate image region features and text features as input to the model to be pre-trained and use self-attention to learn image-text semantic alignments in a brute-force manner. In this paper, we propose a new learning method, Oscar (Object-Semantics Aligned Pre-training), which uses object tags detected in images as anchor points to significantly ease the learning of alignments...
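The core idea above — representing each sample as a word-tag-region triple, with detected object tags shared between the text and image views — can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the function name `build_oscar_input`, the segment-id convention, and the toy features are all assumptions for exposition.

```python
# Illustrative sketch (not the authors' code) of Oscar's input construction:
# each image-text pair becomes a triple (caption words, object tags, region
# features), and the object tags act as anchor points because they live in
# the same word embedding space as the caption.

def build_oscar_input(caption_tokens, object_tags, region_features):
    """Concatenate the word-tag-region triple into one input sequence."""
    # BERT-style special tokens delimit the caption and the tag sequence.
    tokens = ["[CLS]"] + caption_tokens + ["[SEP]"] + object_tags + ["[SEP]"]
    # Segment ids distinguish the language view (0) from the image view (1);
    # the tags and region features share segment 1.
    segment_ids = ([0] * (len(caption_tokens) + 2)
                   + [1] * (len(object_tags) + 1))
    # Region features stay as dense vectors here; a real model would project
    # them to the token embedding dimension and append them to the sequence.
    return tokens, segment_ids, region_features


tokens, segs, regions = build_oscar_input(
    ["a", "dog", "on", "a", "couch"],   # caption word tokens
    ["dog", "couch"],                   # object tags detected in the image
    [[0.1] * 4, [0.2] * 4],             # toy 4-d region feature vectors
)
print(len(tokens), len(segs), len(regions))  # 10 10 2
```

Because the tags ("dog", "couch") literally reuse caption vocabulary, self-attention can align each region to its tag and, through the tag, to the corresponding caption words.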

PDF Abstract (ECCV 2020)
| TASK                      | DATASET                | MODEL | METRIC    | VALUE | GLOBAL RANK |
|---------------------------|------------------------|-------|-----------|-------|-------------|
| Text-Image Retrieval      | COCO                   | Oscar | Recall@10 | 99.8  | #2          |
| Image Captioning          | COCO Captions          | Oscar | BLEU-4    | 41.7  | #1          |
| Image Captioning          | COCO Captions          | Oscar | METEOR    | 30.6  | #1          |
| Image Captioning          | COCO Captions          | Oscar | CIDEr     | 140   | #1          |
| Image Captioning          | COCO Captions          | Oscar | SPICE     | 24.5  | #1          |
| Text-Image Retrieval      | COCO (image as query)  | Oscar | Recall@10 | 98.3  | #1          |
| Visual Question Answering | VQA v2 test-dev        | Oscar | Accuracy  | 73.82 | #1          |
