VirTex: Learning Visual Representations from Textual Annotations

CVPR 2021 · Karan Desai, Justin Johnson

The de facto approach to many vision tasks is to start from pretrained visual representations, typically learned via supervised training on ImageNet. Recent methods have explored unsupervised pretraining to scale to vast quantities of unlabeled images. In contrast, we aim to learn high-quality visual representations from fewer images. To this end, we revisit supervised pretraining, and seek data-efficient alternatives to classification-based pretraining. We propose VirTex -- a pretraining approach using semantically dense captions to learn visual representations. We train convolutional networks from scratch on COCO Captions, and transfer them to downstream recognition tasks including image classification, object detection, and instance segmentation. On all tasks, VirTex yields features that match or exceed those learned on ImageNet -- supervised or unsupervised -- despite using up to ten times fewer images.
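Concretely, the pretraining task is image captioning: a convolutional backbone, initialized from scratch, produces a grid of spatial features that a Transformer decoder cross-attends to while predicting caption tokens; after pretraining, only the backbone is kept for downstream tasks. The following is a minimal PyTorch sketch of that shape. The class name, sizes, and single-layer decoder are illustrative assumptions, not the authors' implementation (which, among other differences, uses bidirectional captioning with forward and backward decoders):

```python
import torch
import torch.nn as nn
import torchvision

class CaptioningPretrain(nn.Module):
    """Sketch of VirTex-style pretraining: predict captions from images,
    then keep only the visual backbone for downstream recognition tasks.
    Positional embeddings for tokens are omitted here for brevity."""

    def __init__(self, vocab_size=10000, d_model=512):
        super().__init__()
        # Visual backbone trained from scratch (no ImageNet weights).
        resnet = torchvision.models.resnet50(weights=None)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # keep 7x7 grid
        self.visual_proj = nn.Linear(2048, d_model)
        # Textual head: a Transformer decoder over caption tokens that
        # cross-attends to the projected visual features.
        self.token_embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=1)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, images, tokens):
        # images: (B, 3, 224, 224); tokens: (B, T) caption token ids
        feats = self.backbone(images)                                 # (B, 2048, 7, 7)
        memory = self.visual_proj(feats.flatten(2).transpose(1, 2))   # (B, 49, d_model)
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        hidden = self.decoder(self.token_embed(tokens), memory, tgt_mask=causal)
        return self.lm_head(hidden)                                   # (B, T, vocab_size)

# Next-token cross-entropy on (image, caption) pairs, e.g. COCO Captions.
model = CaptioningPretrain()
images, tokens = torch.randn(2, 3, 224, 224), torch.randint(0, 10000, (2, 16))
logits = model(images, tokens[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1)
)
```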


Results from the Paper

Task                   Dataset        Model                              Metric   Value  Global Rank
Image Captioning       COCO Captions  VirTex (ResNet-101)                CIDEr    94     #31
Image Captioning       COCO Captions  VirTex (ResNet-101)                SPICE    18.5   #26
Object Detection       COCO minival   VirTex Mask R-CNN (ResNet-50-FPN)  box AP   40.9   #156
Object Detection       COCO test-dev  VirTex Mask R-CNN (ResNet-50-FPN)  AP50     61.7   #117
Object Detection       COCO test-dev  VirTex Mask R-CNN (ResNet-50-FPN)  AP75     44.8   #124
Instance Segmentation  COCO test-dev  VirTex Mask R-CNN (ResNet-50-FPN)  mask AP  36.9   #93
Instance Segmentation  COCO test-dev  VirTex Mask R-CNN (ResNet-50-FPN)  AP50     58.4   #33
Instance Segmentation  COCO test-dev  VirTex Mask R-CNN (ResNet-50-FPN)  AP75     39.7   #28
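The detection and segmentation numbers above come from fine-tuning Mask R-CNN with the caption-pretrained backbone (the authors' setup uses Detectron2). As an illustrative alternative, here is a minimal sketch using torchvision's Mask R-CNN, assuming the pretrained weights have been exported as a plain ResNet-50 state dict; the checkpoint file name is hypothetical:

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Hypothetical checkpoint: a plain ResNet-50 state dict produced by
# captioning pretraining (file name is illustrative, not from the paper).
pretrained = torch.load("virtex_resnet50_backbone.pth")

# Build Mask R-CNN R50-FPN with no ImageNet weights anywhere, then load
# the caption-pretrained weights into the backbone body. strict=False
# tolerates FPN-specific keys absent from a classification-style checkpoint.
model = maskrcnn_resnet50_fpn(weights=None, weights_backbone=None, num_classes=91)
missing, unexpected = model.backbone.body.load_state_dict(pretrained, strict=False)
print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")

# Fine-tune end to end on COCO detection/instance segmentation as usual.
```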
