MotionTrack: Learning Motion Predictor for Multiple Object Tracking

5 Jun 2023  ·  Changcheng Xiao, Qiong Cao, Yujie Zhong, Long Lan, Xiang Zhang, Zhigang Luo, Dacheng Tao

Significant progress has been achieved in multi-object tracking (MOT) through the evolution of detection and re-identification (ReID) techniques. Despite these advancements, accurately tracking objects in scenarios with homogeneous appearance and heterogeneous motion remains a challenge. This challenge arises from two main factors: the insufficient discriminability of ReID features and the predominant reliance on linear motion models in MOT. In this context, we introduce a novel motion-based tracker, MotionTrack, centered around a learnable motion predictor that relies solely on object trajectory information. This predictor integrates two levels of granularity in motion features to enhance the modeling of temporal dynamics and facilitate precise future motion prediction for individual objects. Specifically, the proposed approach adopts a self-attention mechanism to capture token-level information and a Dynamic MLP layer to model channel-level features. MotionTrack is a simple, online tracking approach. Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on datasets such as DanceTrack and SportsMOT, which are characterized by highly complex object motion.
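To make the abstract's description more concrete, below is a minimal sketch of a trajectory-only motion predictor that combines token-level self-attention with a channel-level dynamic MLP, in the spirit described above. The module names, dimensions, and the exact formulation of the "Dynamic MLP" layer are assumptions for illustration; they are not taken from the authors' released code.

```python
# Hypothetical sketch of MotionTrack-style motion prediction (PyTorch).
# Assumed design: embed past boxes as tokens, mix tokens with self-attention,
# mix channels with an input-conditioned ("dynamic") MLP, regress the next box.
import torch
import torch.nn as nn


class DynamicMLP(nn.Module):
    """Channel-level mixing whose gates are generated from the input itself
    (one plausible reading of a 'Dynamic MLP' layer; an assumption here)."""

    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); per-sample gates modulate each channel.
        gates = torch.sigmoid(self.gate(x))
        return self.proj(x * gates)


class MotionPredictor(nn.Module):
    """Predicts the next bounding box from a short trajectory history."""

    def __init__(self, box_dim: int = 4, dim: int = 64, heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(box_dim, dim)                 # per-timestep box -> token
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.dmlp = DynamicMLP(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, box_dim)                  # predicted box offset

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, T, 4) past boxes, e.g. (cx, cy, w, h) per frame.
        tokens = self.embed(history)
        attn_out, _ = self.attn(tokens, tokens, tokens)      # token-level interaction
        tokens = self.norm1(tokens + attn_out)
        tokens = self.norm2(tokens + self.dmlp(tokens))      # channel-level mixing
        return history[:, -1] + self.head(tokens[:, -1])     # next-frame box estimate


if __name__ == "__main__":
    model = MotionPredictor()
    past_boxes = torch.randn(2, 10, 4)   # 2 tracks, 10 past frames each
    print(model(past_boxes).shape)       # torch.Size([2, 4])
```

In a tracker, the predicted box would replace the Kalman-filter prediction step, and association to new detections (e.g. by IoU) would proceed as usual; that surrounding pipeline is omitted here.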


Results from the Paper


Task                   Dataset     Model        Metric  Value  Global Rank
Multi-Object Tracking  DanceTrack  MotionTrack  HOTA    58.2   #15
                                                DetA    81.4   #9
                                                AssA    41.7   #14
                                                MOTA    91.3   #10
                                                IDF1    58.6   #15
Multi-Object Tracking  SportsMOT   MotionTrack  HOTA    74.0   #4
                                                IDF1    88.8   #1
                                                AssA    61.7   #4
                                                MOTA    96.6   #2
                                                DetA    74.0   #12

Methods


No methods listed for this paper.