MCBLT: Multi-Camera Multi-Object 3D Tracking in Long Videos

Object perception from multi-view cameras is crucial for intelligent systems, particularly in indoor environments such as warehouses, retail stores, and hospitals. Most traditional multi-target multi-camera (MTMC) detection and tracking methods rely on 2D object detection, single-view multi-object tracking (MOT), and cross-view re-identification (ReID) techniques, without fully exploiting the 3D information available through multi-view image aggregation. In this paper, we propose a 3D object detection and tracking framework, named MCBLT, which first aggregates multi-view images with the necessary camera calibration parameters to obtain 3D object detections in bird's-eye view (BEV). We then introduce hierarchical graph neural networks (GNNs) to track these 3D detections in BEV and produce MTMC tracking results. Unlike existing methods, MCBLT generalizes well across different scenes and diverse camera settings, and is particularly effective at long-term association. As a result, MCBLT establishes a new state-of-the-art on the AICity'24 dataset with $81.22$ HOTA, and on the WildTrack dataset with $95.6$ IDF1.
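The two-stage pipeline in the abstract (multi-view aggregation into BEV detections, then cross-frame association) can be sketched as follows. This is an illustrative approximation only: the paper's learned BEV detector is replaced by a simple homography-based ground-plane projection, and the hierarchical GNN tracker by greedy nearest-neighbour matching. All function names, the fusion radius, and the matching threshold are hypothetical and not taken from the paper.

```python
import numpy as np

def to_bev(points_px, H):
    """Project image foot points (N, 2) onto the ground plane via a 3x3 homography H."""
    pts = np.hstack([points_px, np.ones((len(points_px), 1))])
    g = pts @ H.T
    return g[:, :2] / g[:, 2:3]  # de-homogenize

def fuse_views(bev_points_per_cam, radius=0.5):
    """Greedy cross-view fusion: merge BEV points within `radius` into one detection."""
    all_pts = np.vstack(bev_points_per_cam)
    used = np.zeros(len(all_pts), dtype=bool)
    fused = []
    for i in range(len(all_pts)):
        if used[i]:
            continue
        close = np.linalg.norm(all_pts - all_pts[i], axis=1) < radius
        used |= close
        fused.append(all_pts[close].mean(axis=0))  # centroid of co-located views
    return np.array(fused)

def associate(prev_tracks, detections, max_dist=1.0):
    """Frame-to-frame nearest-neighbour matching in BEV (stand-in for the GNN tracker)."""
    matches = {}
    for tid, pos in prev_tracks.items():
        if len(detections) == 0:
            break
        d = np.linalg.norm(detections - pos, axis=1)
        j = int(d.argmin())
        if d[j] < max_dist:
            matches[tid] = j
    return matches
```

For example, two cameras observing the same person produce nearby BEV points that `fuse_views` collapses into a single detection, which `associate` then links to the existing track; the real system replaces both heuristics with learned components.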


Results from the Paper


 Ranked #1 on Multi-Object Tracking on Wildtrack (using extra training data)

| Task                  | Dataset                | Model     | Metric Name | Metric Value | Global Rank | Uses Extra Training Data |
|-----------------------|------------------------|-----------|-------------|--------------|-------------|--------------------------|
| Multi-Object Tracking | 2024 AI City Challenge | BEV-SUSHI | HOTA        | 81.22        | #1          |                          |
| Multi-Object Tracking | 2024 AI City Challenge | BEV-SUSHI | DetA        | 86.94        | #1          |                          |
| Multi-Object Tracking | 2024 AI City Challenge | BEV-SUSHI | AssA        | 76.19        | #1          |                          |
| Multi-Object Tracking | 2024 AI City Challenge | BEV-SUSHI | LocA        | 95.67        | #1          |                          |
| Multi-Object Tracking | Wildtrack              | BEV-SUSHI | IDF1        | 95.6         | #1          | Yes                      |
| Multi-Object Tracking | Wildtrack              | BEV-SUSHI | MOTA        | 92.6         | #1          | Yes                      |

Methods


No methods listed for this paper.