Grounded Language-Image Pre-training

This paper presents a grounded language-image pre-training (GLIP) model for learning object-level, language-aware, and semantic-rich visual representations. GLIP unifies object detection and phrase grounding for pre-training. The unification brings two benefits: 1) it allows GLIP to learn from both detection and grounding data, improving both tasks and bootstrapping a good grounding model; 2) GLIP can leverage massive image-text pairs by generating grounding boxes in a self-training fashion, making the learned representations semantic-rich. In our experiments, we pre-train GLIP on 27M grounding examples, including 3M human-annotated and 24M web-crawled image-text pairs. The learned representations demonstrate strong zero-shot and few-shot transferability to various object-level recognition tasks. 1) When directly evaluated on COCO and LVIS (without seeing any COCO images during pre-training), GLIP achieves 49.8 AP and 26.9 AP, respectively, surpassing many supervised baselines. 2) After fine-tuning on COCO, GLIP achieves 60.8 AP on val and 61.5 AP on test-dev, surpassing the prior SoTA. 3) When transferred to 13 downstream object detection tasks, a 1-shot GLIP rivals a fully-supervised Dynamic Head. Code will be released at
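The key unification the abstract describes is to score region proposals against the tokens of a text prompt (e.g. "person. bicycle. car.") rather than against a fixed classifier head, so detection becomes a phrase-grounding problem. A minimal sketch of that region-word alignment step, using NumPy with assumed feature shapes and function names (not the authors' code):

```python
import numpy as np

def alignment_scores(region_feats, token_feats):
    """Region-word alignment logits S = O P^T (a sketch of the idea).

    region_feats: (num_regions, d) visual features O from the image encoder
    token_feats:  (num_tokens, d) language features P from the text encoder
    returns:      (num_regions, num_tokens) alignment scores, used in place
                  of per-class classification logits
    """
    return region_feats @ token_feats.T

# Hypothetical shapes for illustration: 100 proposals, a 16-token prompt.
rng = np.random.default_rng(0)
O = rng.standard_normal((100, 256))
P = rng.standard_normal((16, 256))
S = alignment_scores(O, P)
print(S.shape)  # (100, 16)
```

Because the "class" axis is now the prompt's token axis, the same scoring works for detection prompts (class names) and for grounding captions, which is what lets both data sources train one model.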

Results from the Paper

 Ranked #1 on Phrase Grounding on Flickr30k Entities Test (using extra training data)

Task             | Dataset                | Model                      | Metric | Value | Global Rank | Uses Extra Training Data
-----------------|------------------------|----------------------------|--------|-------|-------------|-------------------------
Object Detection | COCO minival           | GLIP (Swin-L, multi-scale) | box AP | 60.8  | #2          | Yes
Object Detection | COCO test-dev          | GLIP (Swin-L, multi-scale) | box AP | 61.5  | #3          | Yes
                 |                        |                            | AP50   | 79.5  | #1          |
                 |                        |                            | AP75   | 67.7  | #1          |
                 |                        |                            | APS    | 45.3  | #1          |
                 |                        |                            | APM    | 64.9  | #1          |
                 |                        |                            | APL    | 75.0  | #1          |
Phrase Grounding | Flickr30k Entities Test | GLIP                      | R@1    | 87.1  | #1          | Yes
                 |                        |                            | R@5    | 96.9  | #1          |
                 |                        |                            | R@10   | 98.1  | #1          |
