MOTRv2: Bootstrapping End-to-End Multi-Object Tracking by Pretrained Object Detectors

CVPR 2023 · Yuang Zhang, Tiancai Wang, Xiangyu Zhang

In this paper, we propose MOTRv2, a simple yet effective pipeline to bootstrap end-to-end multi-object tracking with a pretrained object detector. Existing end-to-end methods such as MOTR and TrackFormer are inferior to their tracking-by-detection counterparts, mainly due to their poor detection performance. We aim to improve MOTR by elegantly incorporating an extra object detector. We first adopt the anchor formulation of queries and then use an extra object detector to generate proposals as anchors, providing a detection prior to MOTR. This simple modification greatly eases the conflict between the jointly learned detection and association tasks in MOTR. MOTRv2 retains the query propagation feature and scales well to large-scale benchmarks. MOTRv2 ranked 1st (73.4% HOTA on DanceTrack) in the 1st Multiple People Tracking in Group Dance Challenge. Moreover, MOTRv2 reaches state-of-the-art performance on the BDD100K dataset. We hope this simple and effective pipeline can provide new insights to the end-to-end MOT community. Code is available at https://github.com/megvii-research/MOTRv2.
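The core mechanism described above, encoding an external detector's proposals as anchor queries and concatenating them with the propagated track queries, can be illustrated with a minimal sketch. This is not the authors' implementation (see the linked repository for that): the module name `ProposalQueryEncoder`, the `sine_embed` helper, and all tensor shapes here are assumptions chosen for clarity.

```python
# Minimal sketch (not the authors' code) of MOTRv2's key idea: turning an
# external detector's boxes into anchor ("proposal") queries for the MOTR
# decoder. Names and shapes are illustrative assumptions.
import torch
import torch.nn as nn


def sine_embed(boxes: torch.Tensor, dim: int = 256) -> torch.Tensor:
    """Sinusoidal embedding of normalized (cx, cy, w, h) boxes, DETR-style."""
    half = dim // 8  # dim/8 frequencies per box coordinate -> dim total
    freqs = 10000 ** (torch.arange(half, device=boxes.device) / half)
    pos = boxes.unsqueeze(-1) / freqs            # (N, 4, half)
    pos = torch.cat([pos.sin(), pos.cos()], -1)  # (N, 4, 2*half)
    return pos.flatten(1)                        # (N, dim)


class ProposalQueryEncoder(nn.Module):
    """Encode (box, score) proposals from a frozen detector as decoder queries."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.score_proj = nn.Linear(1, dim)  # detection confidence -> content

    def forward(self, boxes, scores, track_queries, track_anchors):
        # boxes: (N, 4) normalized cxcywh proposals; scores: (N, 1)
        det_queries = self.score_proj(scores) + sine_embed(boxes)
        # Proposals act as anchors for newborn objects; track queries keep
        # the anchors propagated from the previous frame.
        queries = torch.cat([det_queries, track_queries], dim=0)
        anchors = torch.cat([boxes, track_anchors], dim=0)
        return queries, anchors  # fed to the MOTR transformer decoder
```

In the paper, the external detector is YOLOX, and these proposal queries take the place of MOTR's learned detect queries, which is what supplies the detection prior while leaving the query-propagation association mechanism untouched.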


Results from the Paper


Ranked #2 on Multi-Object Tracking on DanceTrack (using extra training data)

Task                      Dataset      Model   Metric  Value  Global Rank
Multiple Object Tracking  BDD100K val  MOTRv2  mMOTA   43.6   #3
Multiple Object Tracking  BDD100K val  MOTRv2  mIDF1   56.5   #2
Multi-Object Tracking     DanceTrack   MOTRv2  HOTA    73.4   #2
Multi-Object Tracking     DanceTrack   MOTRv2  DetA    83.7   #2
Multi-Object Tracking     DanceTrack   MOTRv2  AssA    64.4   #2
Multi-Object Tracking     DanceTrack   MOTRv2  MOTA    92.1   #4
Multi-Object Tracking     DanceTrack   MOTRv2  IDF1    76.0   #3
