Omni-DETR: Omni-Supervised Object Detection with Transformers

We consider the problem of omni-supervised object detection, which can exploit unlabeled, fully labeled, and weakly labeled data, with weak annotations such as image tags, counts, and points. This is enabled by a unified architecture, Omni-DETR, built on recent progress in student-teacher frameworks and end-to-end transformer-based object detection. Under this unified architecture, different types of weak labels can be leveraged to generate accurate pseudo labels, through a bipartite-matching-based filtering mechanism, for the model to learn from. In experiments, Omni-DETR achieves state-of-the-art results on multiple datasets and settings. We also find that weak annotations help improve detection performance, and that a mixture of them can achieve a better trade-off between annotation cost and accuracy than standard complete annotation. These findings could encourage larger object detection datasets with mixed annotations. The code is available at
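The filtering idea described above can be sketched in a few lines: match the teacher's predictions against weak annotations with a Hungarian (bipartite) assignment, and keep only the confident matches as pseudo labels. The sketch below is illustrative only, assuming point-style weak labels and a simple distance-plus-confidence cost; the function name, cost weights, and threshold are hypothetical, not the paper's actual implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def filter_pseudo_labels(pred_boxes, pred_scores, point_labels, score_thresh=0.5):
    """Keep teacher predictions that bipartite-match weak point annotations.

    pred_boxes:   (N, 4) teacher boxes as [cx, cy, w, h], normalized coords.
    pred_scores:  (N, C) per-class teacher confidences.
    point_labels: list of (x, y, class_id) weak point annotations.
    Returns a list of (box, class_id) pseudo labels.
    """
    if len(point_labels) == 0 or len(pred_boxes) == 0:
        return []
    points = np.array([[x, y] for x, y, _ in point_labels])   # (M, 2)
    classes = np.array([c for _, _, c in point_labels])       # (M,)
    centers = pred_boxes[:, :2]                               # (N, 2)

    # Localization cost: distance from each predicted box center to each point.
    loc_cost = np.linalg.norm(centers[:, None, :] - points[None, :, :], axis=-1)
    # Classification cost: negative teacher confidence for the annotated class.
    cls_cost = -pred_scores[:, classes]                       # (N, M)
    cost = loc_cost + cls_cost

    # One-to-one bipartite matching between predictions and weak labels.
    rows, cols = linear_sum_assignment(cost)
    pseudo = []
    for r, c in zip(rows, cols):
        # Filter: only confident matches become pseudo labels for the student.
        if pred_scores[r, classes[c]] >= score_thresh:
            pseudo.append((pred_boxes[r], int(classes[c])))
    return pseudo
```

With tag- or count-style weak labels, the same pattern applies with a different cost (e.g., classification cost only), which is what makes a single matching-based filter reusable across annotation types.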

CVPR 2022
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Semi-Supervised Object Detection | COCO 1% labeled data | Omni-DETR | mAP | 18.6 | #19 |
| Semi-Supervised Object Detection | COCO 2% labeled data | Omni-DETR | mAP | 23.2 | #14 |
| Semi-Supervised Object Detection | COCO 5% labeled data | Omni-DETR | mAP | 30.2 | #17 |
| Semi-Supervised Object Detection | COCO 10% labeled data | Omni-DETR | mAP | 34.1 | #16 |

