ODMS (Object Depth via Motion and Segmentation)

Introduced by Griffin et al. in Learning Object Depth from Camera Motion and Video Object Segmentation

ODMS is a dataset for learning Object Depth via Motion and Segmentation. ODMS training data are configurable and extensible, with each training example consisting of a series of object segmentation masks, camera movement distances, and a ground-truth object depth. For benchmark evaluation, the dataset provides four validation and test sets with 15,650 examples in multiple domains, including robotics and driving.

Source: https://github.com/griffbr/ODMS
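
The linked repository contains the actual dataset generation and evaluation code. As a rough illustration of the geometry the task builds on (not the paper's learned method), the sketch below estimates object depth from two binary segmentation masks and a known forward camera movement, assuming a pinhole camera and an object whose apparent width scales inversely with its distance; the function and variable names are hypothetical and not part of the ODMS codebase.

```python
import numpy as np

def estimate_depth_from_masks(mask_near, mask_far, camera_move):
    """Estimate object depth at the nearer camera position (hypothetical helper).

    Assumes a pinhole camera moving toward a static object along the optical
    axis: apparent object width w is proportional to 1/z, so
    w_near / w_far = z_far / z_near, with z_far = z_near + camera_move.
    """
    # Use the square root of the mask area as a scale proxy for object width.
    w_near = np.sqrt(mask_near.sum())
    w_far = np.sqrt(mask_far.sum())
    scale_ratio = w_near / w_far  # > 1 when the camera has moved closer
    # From z_near * scale_ratio = z_near + camera_move:
    return camera_move / (scale_ratio - 1.0)

# Toy example: a 20x20-pixel object mask grows to 25x25 after moving 1.0 m closer.
mask_far = np.zeros((480, 640)); mask_far[100:120, 100:120] = 1
mask_near = np.zeros((480, 640)); mask_near[100:125, 100:125] = 1
print(estimate_depth_from_masks(mask_near, mask_far, camera_move=1.0))  # -> 4.0 m
```

The ODMS benchmark itself pairs sequences of such masks and camera distances with ground-truth depth so that models can be trained and evaluated beyond this simple closed-form approximation.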

License

  • Unknown
