Deeply Unsupervised Patch Re-Identification for Pre-training Object Detectors

Unsupervised pre-training aims to learn transferable features that benefit downstream tasks. However, most state-of-the-art unsupervised methods focus on learning global representations for image-level classification rather than discriminative local region representations, which limits their transferability to region-level downstream tasks such as object detection. To improve the transferability of pre-trained features to object detection, we present Deeply Unsupervised Patch Re-ID (DUPR), a simple yet effective method for unsupervised visual representation learning. The patch Re-ID task treats each image patch as a pseudo-identity and contrastively learns its correspondence across two augmented views, yielding discriminative local features for object detection. Patch Re-ID is then performed in a deeply unsupervised manner, which suits object detection, since detectors typically require multilevel feature maps. Extensive experiments demonstrate that DUPR outperforms state-of-the-art unsupervised pre-training methods, and even ImageNet supervised pre-training, on various downstream tasks related to object detection.
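The abstract describes the core objective as patch-level contrastive matching applied at multiple feature levels. The sketch below shows one plausible form of such a loss: an InfoNCE term in which each patch must retrieve its counterpart in the other view, averaged over feature levels. The function names, the InfoNCE formulation, and the per-level averaging are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def patch_reid_loss(patches_a, patches_b, temperature=0.2):
    """Hypothetical InfoNCE over patch embeddings from two augmented views.

    patches_a, patches_b: (N, D) tensors; row i of each is assumed to embed
    the same spatial patch (one pseudo-identity), so the matching row in the
    other view is the positive and all other rows serve as negatives.
    """
    a = F.normalize(patches_a, dim=1)
    b = F.normalize(patches_b, dim=1)
    logits = a @ b.t() / temperature               # (N, N) cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)        # retrieve the counterpart patch

def deeply_unsupervised_loss(levels_a, levels_b):
    """Apply the patch Re-ID loss at every feature level ("deeply"),
    mirroring the multilevel feature maps that detectors consume."""
    losses = [patch_reid_loss(a, b) for a, b in zip(levels_a, levels_b)]
    return torch.stack(losses).mean()
```

Applying the same contrastive term at every pyramid level, rather than only on the final global embedding, is what would give the intermediate feature maps direct supervision, which is the property the abstract argues matters for region-level transfer.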
