3D Multi-Object Tracking

23 papers with code • 5 benchmarks • 7 datasets

Image: Weng et al.

Most implemented papers

Center-based 3D Object Detection and Tracking

tianweiy/CenterPoint CVPR 2021

Three-dimensional objects are commonly represented as 3D boxes in a point cloud.
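The "3D box" above usually means the 7-value parametrisation (centre, size, heading) used by common LiDAR detection benchmarks. A minimal sketch with illustrative field names:

```python
# A 3D bounding box as a centre, a size, and a heading angle -- the
# 7-value form used by common LiDAR detection datasets. Field names
# here are illustrative, not taken from any specific codebase.

from dataclasses import dataclass
import math

@dataclass
class Box3D:
    x: float; y: float; z: float      # centre in the LiDAR frame
    l: float; w: float; h: float      # length, width, height
    yaw: float                        # heading around the vertical axis

    def bev_corners(self):
        """Four (x, y) corners of the box footprint in bird's-eye view."""
        c, s = math.cos(self.yaw), math.sin(self.yaw)
        out = []
        for dx, dy in ((self.l / 2, self.w / 2), (self.l / 2, -self.w / 2),
                       (-self.l / 2, -self.w / 2), (-self.l / 2, self.w / 2)):
            # rotate the local offset by yaw, then translate to the centre
            out.append((self.x + c * dx - s * dy, self.y + s * dx + c * dy))
        return out
```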

Probabilistic 3D Multi-Object Tracking for Autonomous Driving

eddyhkchiu/mahalanobis_3d_multi_object_tracking 16 Jan 2020

Our method estimates the object states by adopting a Kalman filter.
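The state-estimation step can be sketched with a constant-velocity Kalman filter, reduced here to one dimension for clarity (a real tracker filters the full box state, one such filter per track); the noise values and names are illustrative:

```python
# Minimal 1D constant-velocity Kalman filter: a sketch of the
# predict/update cycle used by Kalman-filter-based 3D MOT methods.
# State is [position, velocity]; only position is measured.

def kf_step(x, v, P, z, dt=1.0, q=0.01, r=0.1):
    """One predict+update cycle. x: position, v: velocity,
    P: 2x2 covariance (list of lists), z: measured position."""
    # Predict with the constant-velocity motion model F = [[1, dt], [0, 1]]
    x_pred = x + v * dt
    v_pred = v
    P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1],
          P[1][1] + q]]
    # Update: we observe position only, so H = [1, 0]
    S = P[0][0] + r                      # innovation covariance
    K0, K1 = P[0][0] / S, P[1][0] / S    # Kalman gain
    y = z - x_pred                       # innovation (measurement residual)
    x_new = x_pred + K0 * y
    v_new = v_pred + K1 * y
    P_new = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
             [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]
    return x_new, v_new, P_new
```

Feeding the filter noisy box centres frame by frame yields smoothed positions plus velocity estimates, which the prediction step then uses to extrapolate a track through frames where detection fails.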

EagerMOT: 3D Multi-Object Tracking via Sensor Fusion

aleksandrkim61/EagerMOT 29 Apr 2021

Multi-object tracking (MOT) enables mobile robots to perform well-informed motion planning and navigation by localizing surrounding objects in 3D space and time.

Exploring Simple 3D Multi-Object Tracking for Autonomous Driving

qcraftai/simtrack ICCV 2021

3D multi-object tracking in LiDAR point clouds is a key ingredient for self-driving vehicles.

SRT3D: A Sparse Region-Based 3D Object Tracking Approach for the Real World

dlr-rm/3dobjecttracking 25 Oct 2021

Finally, we use a pre-rendered sparse viewpoint model to create a joint posterior probability for the object pose.

SimpleTrack: Understanding and Rethinking 3D Multi-object Tracking

tusimple/simpletrack 18 Nov 2021

3D multi-object tracking (MOT) has witnessed numerous novel benchmarks and approaches in recent years, especially those under the "tracking-by-detection" paradigm.
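The tracking-by-detection loop the excerpt refers to runs a detector per frame and then associates the detections with existing tracks. A minimal, illustrative association step (greedy nearest-neighbour on box centres; the threshold and names are assumptions, not any specific paper's matcher):

```python
# Greedy association of existing tracks to new detections by 3D centre
# distance. Matched tracks would be updated, unmatched detections spawn
# new tracks; max_dist is an illustrative gating threshold.

def associate(tracks, detections, max_dist=2.0):
    """tracks: {track_id: (x, y, z)}, detections: list of (x, y, z).
    Returns (matches: {track_id: det_index}, unmatched det indices)."""
    pairs = sorted(
        (sum((t[i] - d[i]) ** 2 for i in range(3)) ** 0.5, tid, j)
        for tid, t in tracks.items()
        for j, d in enumerate(detections)
    )
    matches, used_t, used_d = {}, set(), set()
    for dist, tid, j in pairs:          # cheapest pairs first
        if dist > max_dist:
            break                        # everything after is gated out
        if tid in used_t or j in used_d:
            continue
        matches[tid] = j
        used_t.add(tid)
        used_d.add(j)
    unmatched_dets = [j for j in range(len(detections)) if j not in used_d]
    return matches, unmatched_dets
```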

FANTrack: 3D Multi-Object Tracking with Feature Association Network

wise-lab/fantrack 7 May 2019

Instead, we exploit the power of deep learning to formulate the data association problem as inference in a CNN.
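FANTrack learns the affinity scores with a network; the sketch below assumes the affinities are already given and shows only the downstream step they feed, choosing the track-detection assignment with the highest total affinity (brute force over permutations for tiny inputs; real code would use the Hungarian algorithm):

```python
# Pick the globally best one-to-one track/detection assignment given an
# affinity matrix. Brute-force enumeration, for illustration only.

from itertools import permutations

def best_assignment(affinity):
    """affinity[i][j]: score for matching track i to detection j
    (square case). Returns the detection index assigned to each track."""
    n = len(affinity)
    best, best_score = None, float("-inf")
    for perm in permutations(range(n)):
        score = sum(affinity[i][perm[i]] for i in range(n))
        if score > best_score:
            best, best_score = perm, score
    return list(best)
```

Note that the global optimum can differ from greedy matching: pairing the single best edge first may force a poor match elsewhere.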

3D Multi-Object Tracking: A Baseline and New Evaluation Metrics

xinshuoweng/AB3DMOT 9 Jul 2019

Additionally, 3D MOT datasets such as KITTI evaluate MOT methods in 2D space, and standardized 3D MOT evaluation tools for a fair comparison of 3D MOT methods are missing.
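The evaluation tools in question build on the CLEAR-MOT accuracy score; AB3DMOT's averaged metrics additionally integrate a variant of it over recall thresholds, which this sketch does not show:

```python
# CLEAR-MOT accuracy:
#   MOTA = 1 - (false negatives + false positives + ID switches) / #GT boxes
# A single-number summary of detection and identity errors over a sequence.

def mota(num_fn, num_fp, num_idsw, num_gt):
    if num_gt == 0:
        raise ValueError("MOTA is undefined with no ground-truth boxes")
    return 1.0 - (num_fn + num_fp + num_idsw) / num_gt
```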

GNN3DMOT: Graph Neural Network for 3D Multi-Object Tracking With 2D-3D Multi-Feature Learning

xinshuoweng/GNN3DMOT CVPR 2020

As a result, the feature of one object is informed of the features of other objects, so that each object's feature can lean towards objects with similar features (i.e., likely the same ID) and deviate from objects with dissimilar features (i.e., likely different IDs), yielding a more discriminative feature for each object. In addition, instead of obtaining features from either 2D or 3D space alone as in prior work, we propose a novel joint feature extractor that learns appearance and motion features from 2D and 3D space simultaneously.
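The interaction described above can be sketched as one round of message passing over a fully connected object graph. In the paper the update is a learned GNN; this toy version uses fixed cosine-similarity weights so that similar features attract and dissimilar ones repel:

```python
# One round of feature interaction: each object's feature is pulled
# toward similar objects' features and pushed away from dissimilar ones.
# The fixed cosine weighting stands in for the paper's learned GNN.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def interact(features, step=0.1):
    """features: list of equal-length feature vectors (lists of floats)."""
    out = []
    for i, fi in enumerate(features):
        msg = [0.0] * len(fi)
        for j, fj in enumerate(features):
            if i == j:
                continue
            w = cosine(fi, fj)  # w > 0 pulls fi toward fj, w < 0 pushes away
            for k in range(len(fi)):
                msg[k] += w * (fj[k] - fi[k])
        out.append([fi[k] + step * msg[k] for k in range(len(fi))])
    return out
```

After the round, features of likely-same-ID objects end up closer together and features of different objects further apart, which makes the subsequent affinity computation more discriminative.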