3D Multi-Object Tracking
23 papers with code • 5 benchmarks • 7 datasets
[Figure: Weng et al.]
Most implemented papers
Center-based 3D Object Detection and Tracking
Three-dimensional objects are commonly represented as 3D boxes in a point cloud.
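A common parameterization of such a 3D box is a center, dimensions, and a heading angle. The sketch below is illustrative; field names and conventions vary between datasets (e.g., KITTI and nuScenes differ in axis and yaw conventions):

```python
from dataclasses import dataclass

@dataclass
class Box3D:
    """A 3D bounding box in the point-cloud (LiDAR) frame.

    Field names and conventions here are illustrative assumptions,
    not the exact layout of any particular dataset.
    """
    x: float    # center x (m)
    y: float    # center y (m)
    z: float    # center z (m)
    l: float    # length (m)
    w: float    # width (m)
    h: float    # height (m)
    yaw: float  # heading angle around the vertical axis (rad)

# A car-sized box ~10 m ahead of the sensor.
box = Box3D(x=10.0, y=-2.5, z=0.9, l=4.2, w=1.8, h=1.6, yaw=0.1)
```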
Probabilistic 3D Multi-Object Tracking for Autonomous Driving
Our method estimates the object states by adopting a Kalman Filter.
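A minimal sketch of Kalman-filter state estimation for one tracked object, using a constant-velocity motion model. The state layout, noise magnitudes, and frame interval below are illustrative assumptions, not the settings of any particular tracker:

```python
import numpy as np

# State: [x, y, z, vx, vy, vz]; observation: [x, y, z] (detected box center).
dt = 0.1                                       # frame interval (s), assumed
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)                     # position += velocity * dt
H = np.hstack([np.eye(3), np.zeros((3, 3))])   # we observe position only
Q = 0.01 * np.eye(6)                           # process noise (assumed)
R = 0.1 * np.eye(3)                            # measurement noise (assumed)

def predict(x, P):
    """Propagate the state and its covariance through the motion model."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with a detected position z."""
    y = z - H @ x                              # innovation
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ y
    P = (np.eye(6) - K @ H) @ P
    return x, P

# One predict/update cycle for a detection at (1, 2, 0.5):
x, P = np.zeros(6), np.eye(6)
x, P = predict(x, P)
x, P = update(x, P, np.array([1.0, 2.0, 0.5]))
```

With the high initial uncertainty above, the updated position is pulled most of the way toward the measurement; as the filter converges, the gain drops and the motion model contributes more.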
EagerMOT: 3D Multi-Object Tracking via Sensor Fusion
Multi-object tracking (MOT) enables mobile robots to perform well-informed motion planning and navigation by localizing surrounding objects in 3D space and time.
Exploring Simple 3D Multi-Object Tracking for Autonomous Driving
3D multi-object tracking in LiDAR point clouds is a key ingredient for self-driving vehicles.
SRT3D: A Sparse Region-Based 3D Object Tracking Approach for the Real World
Finally, we use a pre-rendered sparse viewpoint model to create a joint posterior probability for the object pose.
SimpleTrack: Understanding and Rethinking 3D Multi-object Tracking
3D multi-object tracking (MOT) has witnessed numerous novel benchmarks and approaches in recent years, especially those under the "tracking-by-detection" paradigm.
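The core step of the "tracking-by-detection" paradigm is associating each frame's detections with existing tracks. The sketch below uses greedy nearest-center matching with a distance threshold; the threshold and data layout are illustrative assumptions (real trackers typically use 3D IoU or Mahalanobis distance, often with the Hungarian algorithm):

```python
import numpy as np

def associate(track_centers, det_centers, max_dist=2.0):
    """Greedily match tracks to detections by center distance.

    Returns (matches, unmatched_tracks, unmatched_dets) as index lists.
    """
    matches = []
    unmatched_tracks = set(range(len(track_centers)))
    unmatched_dets = set(range(len(det_centers)))
    if track_centers.size and det_centers.size:
        # Pairwise Euclidean distances between track and detection centers.
        dists = np.linalg.norm(
            track_centers[:, None, :] - det_centers[None, :, :], axis=-1)
        # Repeatedly take the closest remaining pair under the threshold.
        while True:
            i, j = np.unravel_index(np.argmin(dists), dists.shape)
            if dists[i, j] > max_dist:
                break
            matches.append((i, j))
            unmatched_tracks.discard(i)
            unmatched_dets.discard(j)
            dists[i, :] = np.inf   # remove this track from further matching
            dists[:, j] = np.inf   # remove this detection too
    return matches, sorted(unmatched_tracks), sorted(unmatched_dets)

tracks = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
dets = np.array([[0.5, 0.1, 0.0], [30.0, 0.0, 0.0]])
m, ut, ud = associate(tracks, dets)
# Track 0 matches detection 0; track 1 and detection 1 stay unmatched,
# so track 1 would coast (or die) and detection 1 would spawn a new track.
```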
FANTrack: 3D Multi-Object Tracking with Feature Association Network
Instead, we exploit the power of deep learning to formulate the data association problem as inference in a CNN.
3D Multi-Object Tracking: A Baseline and New Evaluation Metrics
Additionally, 3D MOT datasets such as KITTI evaluate MOT methods in 2D space, and standardized 3D MOT evaluation tools for a fair comparison of 3D MOT methods are missing.
GNN3DMOT: Graph Neural Network for 3D Multi-Object Tracking With 2D-3D Multi-Feature Learning
As a result, each object's feature is informed by the features of the other objects, so it can move towards objects with similar features (i.e., objects likely sharing the same ID) and away from objects with dissimilar features (i.e., objects likely with different IDs), yielding a more discriminative feature for each object; (2) instead of obtaining features from either 2D or 3D space as in prior work, we propose a novel joint feature extractor that learns appearance and motion features from 2D and 3D space simultaneously.