The Multi-Object Tracking and Segmentation (MOTS) benchmark consists of 21 training sequences and 29 test sequences. It is based on the KITTI Tracking Evaluation 2012 and extends the annotations to the Multi-Object Tracking and Segmentation (MOTS) task. To this end, we added dense pixel-wise segmentation labels for every object. We evaluate submitted results using the metrics HOTA, CLEAR MOT, and MT/PT/ML, and rank methods by HOTA [1] (adapted for the segmentation case). Evaluation is performed using the code from the TrackEval repository. [1] J. Luiten, A. Ošep, P. Dendorfer, P. Torr, A. Geiger, L. Leal-Taixé, B. Leibe: MOTS: Multi-Object Tracking and Segmentation. CVPR 2019.
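The CLEAR MOT family mentioned above centers on MOTA, which penalizes false negatives, false positives, and identity switches relative to the number of ground-truth objects. A minimal sketch of that formula, with illustrative variable names that are not taken from the TrackEval API:

```python
def mota(num_fn: int, num_fp: int, num_idsw: int, num_gt: int) -> float:
    """CLEAR MOT accuracy: MOTA = 1 - (FN + FP + IDSW) / GT.

    num_fn   -- missed ground-truth objects (false negatives)
    num_fp   -- predicted objects with no ground-truth match (false positives)
    num_idsw -- identity switches across frames
    num_gt   -- total ground-truth objects over the sequence
    """
    return 1.0 - (num_fn + num_fp + num_idsw) / num_gt

# Example: 100 misses, 50 false positives, 10 ID switches over 1,000 objects.
print(mota(num_fn=100, num_fp=50, num_idsw=10, num_gt=1000))  # 0.84
```

In the segmentation setting, matches between predictions and ground truth are established via mask IoU rather than bounding-box overlap, but the aggregation is the same.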
26 PAPERS • 1 BENCHMARK
The Waymo Open Dataset currently contains 1,950 segments. The authors plan to grow this dataset in the future. Currently the dataset includes:

1,950 segments of 20s each, collected at 10Hz (390,000 frames) in diverse geographies and conditions

Sensor data
- 1 mid-range lidar
- 4 short-range lidars
- 5 cameras (front and sides)
- Synchronized lidar and camera data
- Lidar to camera projections
- Sensor calibrations and vehicle poses

Labeled data
- Labels for 4 object classes: Vehicles, Pedestrians, Cyclists, Signs
- High-quality labels for lidar data in 1,200 segments
- 12.6M 3D bounding box labels with tracking IDs on lidar data
- High-quality labels for camera data in 1,000 segments
- 11.8M 2D bounding box labels with tracking IDs on camera data
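The quoted frame count follows directly from the segment count, segment length, and capture rate; a quick arithmetic check (illustrative only, no dataset API involved):

```python
# Verify that 1,950 segments x 20 s x 10 Hz matches the quoted 390,000 frames.
segments = 1950
seconds_per_segment = 20
frames_per_second = 10  # 10 Hz capture rate

total_frames = segments * seconds_per_segment * frames_per_second
print(total_frames)  # 390000
```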
378 PAPERS • 12 BENCHMARKS
…Despite its popularity, the dataset itself does not contain ground-truth semantic segmentation labels. However, various researchers have manually annotated parts of the dataset to suit their needs.
3,233 PAPERS • 141 BENCHMARKS