MixTeacher: Mining Promising Labels with Mixed Scale Teacher for Semi-Supervised Object Detection

Scale variation across object instances remains a key challenge in object detection. Despite the remarkable progress of modern detectors, this challenge is particularly evident in the semi-supervised setting. While existing semi-supervised object detection methods rely on strict conditions to filter high-quality pseudo labels from network predictions, we observe that objects at extreme scales tend to receive low confidence scores, leaving these objects without positive supervision. In this paper, we propose a novel framework that addresses the scale variation problem by introducing a mixed scale teacher to improve pseudo label generation and scale-invariant learning. In addition, we propose mining pseudo labels based on the score promotion of predictions across scales, which benefits from the improved predictions of mixed scale features. Extensive experiments on the MS COCO and PASCAL VOC benchmarks under various semi-supervised settings demonstrate that our method achieves new state-of-the-art performance. Code and models are available at https://github.com/lliuz/MixTeacher.
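The score-promotion idea above can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, thresholds, and the assumption that each candidate box has one score from single-scale features and one from mixed-scale features are all hypothetical, chosen only to show the selection rule: keep boxes the teacher is already confident about, and additionally mine boxes whose score rises notably under mixed-scale features.

```python
import torch

def mine_promoted_pseudo_labels(scores_single, scores_mixed, boxes,
                                high_thr=0.9, low_thr=0.5, promote_margin=0.1):
    """Illustrative pseudo-label mining via score promotion (hypothetical
    thresholds, not the paper's exact values).

    scores_single: (N,) teacher scores from single-scale features.
    scores_mixed:  (N,) teacher scores from mixed-scale features.
    boxes:         (N, 4) candidate boxes aligned with both score sets.
    """
    # Standard high-confidence filtering on the mixed-scale predictions.
    confident = scores_mixed >= high_thr
    # Score promotion: the mixed-scale score improved by a clear margin
    # and is at least moderately confident, so the box is likely a real
    # object that single-scale features under-scored (e.g. extreme scale).
    promoted = ((scores_mixed - scores_single) >= promote_margin) & \
               (scores_mixed >= low_thr)
    keep = confident | promoted
    return boxes[keep], scores_mixed[keep]
```

Under this sketch, a box scored 0.40 at a single scale but 0.55 with mixed-scale features would be mined as a pseudo label, whereas a box whose score stays low across scales would not.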

CVPR 2023
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Semi-Supervised Object Detection | COCO 10% labeled data | MixTeacher-FRCNN | mAP | 36.72 | #9 |
| Semi-Supervised Object Detection | COCO 10% labeled data | MixTeacher-FRCNN | detector | FRCNN-Res50 | #1 |
| Semi-Supervised Object Detection | COCO 10% labeled data | MixTeacher-FCOS | mAP | 36.95 | #8 |
| Semi-Supervised Object Detection | COCO 10% labeled data | MixTeacher-FCOS | detector | FCOS-Res50 | #1 |
| Semi-Supervised Object Detection | COCO 1% labeled data | MixTeacher-FCOS | mAP | 23.83 | #9 |
| Semi-Supervised Object Detection | COCO 1% labeled data | MixTeacher-FRCNN | mAP | 25.16 | #7 |
| Semi-Supervised Object Detection | COCO 2% labeled data | MixTeacher-FRCNN | mAP | 29.11 | #3 |
| Semi-Supervised Object Detection | COCO 2% labeled data | MixTeacher-FCOS | mAP | 27.88 | #8 |
| Semi-Supervised Object Detection | COCO 5% labeled data | MixTeacher-FCOS | mAP | 33.42 | #6 |
| Semi-Supervised Object Detection | COCO 5% labeled data | MixTeacher-FRCNN | mAP | 34.06 | #5 |
