Region-centric Image-Language Pretraining for Open-Vocabulary Detection

29 Sep 2023 · Dahun Kim, Anelia Angelova, Weicheng Kuo

We present a new open-vocabulary detection approach based on region-centric image-language pretraining to bridge the gap between image-level pretraining and open-vocabulary object detection. In the pretraining phase, we incorporate the detector architecture on top of the classification backbone, which better serves the region-level recognition needs of detection by enabling the detector heads to learn from large-scale image-text pairs. Using only a standard contrastive loss and no pseudo-labeling, our approach is a simple yet effective extension of contrastive learning that learns emergent object-semantic cues. In addition, we propose a shifted-window learning approach on top of window attention to make the backbone representation more robust, translation-invariant, and less biased by the window pattern. On the popular LVIS open-vocabulary detection benchmark, our approach sets a new state of the art of 37.6 mask APr using the common ViT-L backbone and the public LAION dataset, and 40.5 mask APr using the DataComp-1B dataset, significantly outperforming the best existing approach by +3.7 mask APr at the system level. On the COCO benchmark, we achieve a very competitive 39.6 novel AP without pseudo-labeling or weak supervision. In addition, we evaluate our approach on the transfer detection setup, where it demonstrates notable improvement over the baseline. Visualizations reveal emerging object locality from the pretraining recipe compared to the baseline.
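The abstract compresses two technical ideas: contrastive pretraining through detector-style region heads, and shifted-window learning on top of window attention. As a rough illustration only, below is a minimal PyTorch-style sketch of the general recipe: pool region features with RoIAlign, aggregate them into an image embedding, and apply a standard CLIP-style symmetric contrastive loss; the cyclic-shift helper conveys the flavor of decoupling features from a fixed window grid. Every name here (pool_region_embeddings, out_dim_proj, the RoIAlign settings, the random pseudo-regions) is an assumption for illustration, not the paper's actual implementation.

import torch
import torch.nn.functional as F
from torchvision.ops import roi_align

def pool_region_embeddings(feature_map, boxes, out_dim_proj):
    """Pool per-region features with RoIAlign, project them into the joint
    image-text embedding space, and mean-pool to one vector per image.
    boxes: list of (num_boxes, 4) tensors in (x1, y1, x2, y2) image coords.
    NOTE: the region source and aggregation are illustrative assumptions."""
    regions = roi_align(feature_map, boxes, output_size=(7, 7),
                        spatial_scale=1 / 16)        # (total_boxes, C, 7, 7)
    regions = regions.mean(dim=(2, 3))               # (total_boxes, C)
    regions = out_dim_proj(regions)                  # (total_boxes, D)
    counts = [b.shape[0] for b in boxes]
    return torch.stack([r.mean(0) for r in regions.split(counts)])

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Standard symmetric InfoNCE over in-batch image-text pairs."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.shape[0], device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def shifted_window_pass(window_block, x, shift):
    """Run a window-attention block on a cyclically shifted feature map
    (B, H, W, C) so representations are less tied to one window grid; a
    simplified stand-in for the paper's shifted-window learning."""
    x = torch.roll(x, shifts=(-shift, -shift), dims=(1, 2))
    x = window_block(x)
    return torch.roll(x, shifts=(shift, shift), dims=(1, 2))

# Illustrative usage with dummy shapes:
#   feature_map = backbone(images)            # e.g. (2, 256, 64, 64)
#   proj = torch.nn.Linear(256, 512)
#   boxes = [torch.tensor([[0., 0., 128., 128.]]),
#            torch.tensor([[32., 32., 256., 256.]])]
#   img_emb = pool_region_embeddings(feature_map, boxes, proj)
#   loss = contrastive_loss(img_emb, text_encoder(captions))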

Task                              Dataset    Model  Metric Name                                       Metric Value  Global Rank
Open Vocabulary Object Detection  LVIS v1.0  DITO   AP novel (LVIS base training)                     40.4          #2
Open Vocabulary Object Detection  LVIS v1.0  DITO   AP novel (unrestricted open-vocabulary training)  45.8          #1
Open Vocabulary Object Detection  MSCOCO     DITO   AP 0.5                                            46.1          #4
