TPCN: Temporal Point Cloud Networks for Motion Forecasting

CVPR 2021  ·  Maosheng Ye, Tongyi Cao, Qifeng Chen

We propose the Temporal Point Cloud Networks (TPCN), a novel and flexible framework with joint spatial and temporal learning for trajectory prediction. Unlike existing approaches that rasterize agents and map information as 2D images or operate on a graph representation, our approach extends ideas from point cloud learning with dynamic temporal learning to capture both spatial and temporal information, splitting trajectory prediction into a spatial dimension and a temporal dimension. In the spatial dimension, agents can be viewed as an unordered point set, so point cloud learning techniques apply straightforwardly to model agents' locations. Since the spatial dimension alone does not account for kinematic and motion information, we further propose dynamic temporal learning to model agents' motion over time. Experiments on the Argoverse motion forecasting benchmark show that our approach achieves state-of-the-art results.
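
As a rough illustration of the spatial/temporal split described above, the PyTorch sketch below encodes agent and map locations as an unordered point set (a PointNet-style shared MLP with max pooling standing in for the paper's point cloud learning) and models each agent's past motion with a recurrent module standing in for dynamic temporal learning. All module names, layer sizes, and the use of a GRU are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of a spatial/temporal factorization for trajectory
# prediction. NOT the TPCN implementation; shapes and modules are assumed.
import torch
import torch.nn as nn


class SpatialPointEncoder(nn.Module):
    """Treats all agent/map points in a scene as an unordered set and
    encodes them with a shared MLP plus max pooling (PointNet-style
    stand-in for the paper's point cloud learning)."""

    def __init__(self, in_dim=2, hidden=64, out_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim), nn.ReLU(),
        )

    def forward(self, points):            # points: (B, N, 2) xy locations
        feats = self.mlp(points)          # per-point features (B, N, D)
        pooled = feats.max(dim=1).values  # permutation-invariant set feature
        return feats, pooled


class TemporalEncoder(nn.Module):
    """Models the target agent's motion over time with a GRU, a simple
    stand-in for the paper's dynamic temporal learning."""

    def __init__(self, in_dim=2, hidden=128):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)

    def forward(self, traj):              # traj: (B, T, 2) past positions
        _, h = self.gru(traj)
        return h.squeeze(0)               # (B, hidden) motion feature


class ToyTrajectoryPredictor(nn.Module):
    """Fuses spatial and temporal features and regresses K future modes."""

    def __init__(self, horizon=30, num_modes=6):
        super().__init__()
        self.spatial = SpatialPointEncoder()
        self.temporal = TemporalEncoder()
        self.head = nn.Linear(128 + 128, num_modes * horizon * 2)
        self.horizon, self.num_modes = horizon, num_modes

    def forward(self, scene_points, target_traj):
        _, scene_feat = self.spatial(scene_points)   # set-level context
        motion_feat = self.temporal(target_traj)     # motion over time
        fused = torch.cat([scene_feat, motion_feat], dim=-1)
        out = self.head(fused)
        return out.view(-1, self.num_modes, self.horizon, 2)


if __name__ == "__main__":
    scene = torch.randn(4, 200, 2)   # 200 agent/map points per scene
    past = torch.randn(4, 20, 2)     # 20 past steps for the target agent
    preds = ToyTrajectoryPredictor()(scene, past)
    print(preds.shape)               # torch.Size([4, 6, 30, 2])
```

The actual paper's spatial and temporal modules are considerably more elaborate; the sketch only shows how the two dimensions can be encoded separately and fused before multi-modal decoding.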


Datasets

Argoverse

Results

Task: Motion Forecasting    Dataset: Argoverse CVPR 2020    Model: TPCN

Metric                Value     Global Rank
MR (K=6)              0.1333    #245
minADE (K=1)          1.5752    #281
minFDE (K=1)          3.4872    #276
MR (K=1)              0.5601    #262
minADE (K=6)          0.8153    #259
minFDE (K=6)          1.2442    #254
DAC (K=6)             0.9884    #46
brier-minFDE (K=6)    1.9286    #52
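
For reference, the sketch below computes the minADE, minFDE, and miss-rate quantities reported above in the usual Argoverse fashion (a "miss" when the best candidate's endpoint error exceeds 2 m). The function name and array shapes are illustrative assumptions, not taken from the benchmark's or the paper's code.

```python
# Sketch of standard K-candidate forecasting metrics (minADE, minFDE, MR).
# Shapes and the 2 m miss threshold follow common Argoverse conventions.
import numpy as np


def forecasting_metrics(pred, gt, miss_threshold=2.0):
    """pred: (K, T, 2) candidate trajectories, gt: (T, 2) ground truth.
    Returns minADE, minFDE, and a miss indicator for one agent."""
    errors = np.linalg.norm(pred - gt[None], axis=-1)  # (K, T) per-step error
    ade = errors.mean(axis=1)                          # average over time
    fde = errors[:, -1]                                # endpoint error
    min_ade = ade.min()
    min_fde = fde.min()
    missed = float(min_fde > miss_threshold)           # MR averages these
    return min_ade, min_fde, missed


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pred = rng.normal(size=(6, 30, 2))  # K=6 modes, 30 future steps
    gt = rng.normal(size=(30, 2))
    print(forecasting_metrics(pred, gt))
```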

Methods


No methods listed for this paper.