PointOdyssey: A Large-Scale Synthetic Dataset for Long-Term Point Tracking

We introduce PointOdyssey, a large-scale synthetic dataset and data generation framework for the training and evaluation of long-term fine-grained tracking algorithms. Our goal is to advance the state of the art by placing emphasis on long videos with naturalistic motion. Toward the goal of naturalism, we animate deformable characters using real-world motion capture data, build 3D scenes to match the motion capture environments, and render camera viewpoints using trajectories mined via structure-from-motion on real videos. We create combinatorial diversity by randomizing character appearance, motion profiles, materials, lighting, 3D assets, and atmospheric effects. Our dataset currently includes 104 videos, averaging 2,000 frames in length, with orders of magnitude more correspondence annotations than prior work. We show that existing methods can be trained from scratch on our dataset and outperform the published variants. Finally, we introduce modifications to the PIPs point tracking method, greatly widening its temporal receptive field, which improves its performance on PointOdyssey as well as on two real-world benchmarks. Our data and code are publicly available at: https://pointodyssey.com
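
As a concrete illustration of what the correspondence annotations look like, the sketch below loads one sequence's long-term point trajectories with NumPy. The file name and key names (`annotations.npz`, `trajs_2d`, `visibs`) are hypothetical placeholders rather than the dataset's documented layout; only the general structure (per-frame, per-point 2D positions plus visibility flags) is implied by the paper.

```python
# Hypothetical loading sketch; the file and key names are placeholders, not
# PointOdyssey's documented layout. It only illustrates the shape of
# long-term point-trajectory annotations.
import numpy as np

annot = np.load("sample_sequence/annotations.npz")  # hypothetical path
trajs_2d = annot["trajs_2d"]  # assumed shape (T, N, 2): per-frame pixel coords
visibs = annot["visibs"]      # assumed shape (T, N): per-frame visibility

T, N, _ = trajs_2d.shape
print(f"{N} points tracked across {T} frames; "
      f"{visibs.mean():.1%} of point observations are visible")
```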

ICCV 2023

Datasets

PointOdyssey, TAP-Vid

Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Point Tracking | PointOdyssey | PIPs+ | δ | 32.41 | #2 |
| Point Tracking | PointOdyssey | PIPs+ | Survival | 49.88 | #2 |
| Point Tracking | PointOdyssey | PIPs++ | MTE | 26.95 | #1 |
| Point Tracking | PointOdyssey | PIPs++ | δ | 33.64 | #1 |
| Point Tracking | PointOdyssey | PIPs++ | Survival | 50.47 | #1 |
| Point Tracking | TAP-Vid | PIPs++ | MTE | 4.6 | #1 |
| Point Tracking | TAP-Vid | PIPs++ | δ | 63.45 | #1 |
| Point Tracking | TAP-Vid | PIPs++ | Survival | 88.42 | #1 |
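
The metrics in this table follow the paper's definitions: δ is position accuracy averaged over pixel thresholds {1, 2, 4, 8, 16} (as in TAP-Vid), Survival is the average fraction of a video that a track stays within a 50-pixel failure threshold of the ground truth, and MTE is the median trajectory error in pixels. The sketch below is a minimal, unofficial illustration of these definitions; the array shapes, thresholds, and averaging conventions are assumptions rather than the released evaluation code.

```python
# Illustrative re-implementations of the metrics reported above, based on
# their descriptions in the PointOdyssey paper. Array shapes, thresholds,
# and averaging conventions here are assumptions, not the official code.
import numpy as np

def delta_avg(pred, gt, visib, thresholds=(1, 2, 4, 8, 16)):
    """Position accuracy (delta): fraction of visible points lying within
    each pixel threshold of ground truth, averaged over thresholds."""
    # pred, gt: (N, T, 2) pixel trajectories; visib: (N, T) visibility mask.
    dist = np.linalg.norm(pred - gt, axis=-1)          # (N, T) pixel errors
    vis = visib.astype(bool)
    return 100.0 * np.mean([(dist[vis] < t).mean() for t in thresholds])

def survival_rate(pred, gt, fail_thresh=50.0):
    """Survival: average fraction of the video a track stays within
    fail_thresh pixels of ground truth before its first failure."""
    dist = np.linalg.norm(pred - gt, axis=-1)          # (N, T)
    N, T = dist.shape
    failed = dist > fail_thresh
    # index of first failure per track; T if the track never fails
    first_fail = np.where(failed.any(axis=1), failed.argmax(axis=1), T)
    return 100.0 * float(np.mean(first_fail / T))

def median_traj_error(pred, gt, visib):
    """MTE: median L2 distance (pixels) between predicted and ground-truth
    positions, over all visible point/frame pairs."""
    dist = np.linalg.norm(pred - gt, axis=-1)
    return float(np.median(dist[visib.astype(bool)]))
```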

Methods


No methods listed for this paper.