Integrally Migrating Pre-trained Transformer Encoder-decoders for Visual Object Detection

Modern object detectors take advantage of backbone networks pre-trained on large-scale datasets. Beyond the backbone, however, other components such as the detector head and the feature pyramid network (FPN) are still trained from scratch, which hinders fully tapping the potential of representation models. In this study, we propose to integrally migrate pre-trained transformer encoder-decoders (imTED) to a detector, constructing a feature extraction path that is "fully pre-trained" so that the detector's generalization capacity is maximized. The essential differences between imTED and the baseline detector are twofold: (1) migrating the pre-trained transformer decoder to the detector head while removing the randomly initialized FPN from the feature extraction path; and (2) defining a multi-scale feature modulator (MFM) to enhance scale adaptability. These designs not only significantly reduce the number of randomly initialized parameters but also deliberately unify detector training with representation learning. Experiments on the MS COCO object detection dataset show that imTED consistently outperforms its counterparts by ~2.4 AP. Without bells and whistles, imTED improves the state of the art in few-shot object detection by up to 7.6 AP. Code is available at https://github.com/LiewFeng/imTED.
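The abstract describes the architecture only in words. The PyTorch sketch below is not taken from the paper's repository; it is a minimal illustration of the "fully pre-trained" path under stated assumptions: a pre-trained ViT encoder replaces the backbone+FPN, the matching pre-trained decoder is reused as the detector head, and a hypothetical MFM gates RoI features by scale-aware context. Names such as `ImTEDSketch`, `MultiScaleFeatureModulator`, and the `roi_align` callable are assumptions for illustration; the actual implementation lives at the linked repository.

```python
import torch
import torch.nn as nn


class MultiScaleFeatureModulator(nn.Module):
    """Hypothetical MFM sketch: gates RoI features with a context-dependent,
    channel-wise modulation. The paper's exact formulation may differ."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, roi_feats: torch.Tensor) -> torch.Tensor:
        # roi_feats: (num_rois, num_tokens, dim)
        context = roi_feats.mean(dim=1, keepdim=True)  # global context per RoI
        return roi_feats * self.gate(context)          # channel-wise gating


class ImTEDSketch(nn.Module):
    """Minimal sketch of the 'fully pre-trained' feature extraction path:
    pre-trained ViT encoder -> RoI extraction (no FPN) -> pre-trained decoder
    reused as the detection head."""

    def __init__(self, encoder: nn.Module, decoder: nn.Module,
                 dim: int = 768, num_classes: int = 80):
        super().__init__()
        self.encoder = encoder  # e.g., an MAE pre-trained ViT encoder
        self.decoder = decoder  # the matching pre-trained decoder, as head
        self.mfm = MultiScaleFeatureModulator(dim)
        self.cls_head = nn.Linear(dim, num_classes + 1)  # +1 for background
        self.reg_head = nn.Linear(dim, 4)                # box deltas

    def forward(self, images: torch.Tensor, roi_align) -> tuple:
        feats = self.encoder(images)          # single-scale token map, no FPN
        rois = roi_align(feats)               # (num_rois, num_tokens, dim)
        rois = self.mfm(rois)                 # scale-adaptive modulation
        decoded = self.decoder(rois).mean(dim=1)  # decode, then pool tokens
        return self.cls_head(decoded), self.reg_head(decoded)
```

The key design point the sketch tries to make visible is that every module on the path from pixels to the detection head's features (encoder, decoder) carries pre-trained weights; only the thin classification and regression linear layers are randomly initialized.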

ICCV 2023

Task                        Dataset             Model        Metric  Value  Global Rank
Few-Shot Object Detection   MS-COCO (10-shot)   imTED+ViT-B  AP      22.5   #4
Few-Shot Object Detection   MS-COCO (10-shot)   imTED+ViT-S  AP      15.0   #15
Few-Shot Object Detection   MS-COCO (30-shot)   imTED+ViT-B  AP      30.2   #3
Few-Shot Object Detection   MS-COCO (30-shot)   imTED+ViT-S  AP      21.0   #11
