Panoptic nuScenes: A Large-Scale Benchmark for LiDAR Panoptic Segmentation and Tracking

Panoptic scene understanding and tracking of dynamic agents are essential for robots and automated vehicles to navigate in urban environments. Since LiDARs provide accurate, illumination-independent geometric depictions of the scene, performing these tasks on LiDAR point clouds yields reliable predictions. However, existing datasets lack diversity in the types of urban scenes they cover and contain a limited number of dynamic object instances, which hinders both the learning of these tasks and credible benchmarking of the developed methods. In this paper, we introduce the large-scale Panoptic nuScenes benchmark dataset that extends our popular nuScenes dataset with point-wise ground-truth annotations for semantic segmentation, panoptic segmentation, and panoptic tracking tasks. To facilitate comparison, we provide several strong baselines for each of these tasks on our proposed dataset. Moreover, we analyze the drawbacks of the existing metrics for panoptic tracking and propose the novel instance-centric PAT metric that addresses these concerns. We present exhaustive experiments that demonstrate the utility of Panoptic nuScenes compared to existing datasets and make the online evaluation server available at nuScenes.org. We believe that this extension will accelerate research on novel methods for scene understanding of dynamic urban environments.


Datasets


Introduced in the Paper:

Panoptic nuScenes

Used in the Paper:

nuScenes SemanticKITTI A2D2
| Task                  | Dataset                | Model                     | Metric | Value | Global Rank |
|-----------------------|------------------------|---------------------------|--------|-------|-------------|
| Panoptic Segmentation | Panoptic nuScenes test | (AF)2-S3Net + CenterPoint | PQ     | 76.8  | # 1         |
|                       |                        |                           | SQ     | 89.5  | # 1         |
|                       |                        |                           | RQ     | 85.4  | # 1         |
|                       |                        |                           | mIoU   | 78.8  | # 1         |
| Panoptic Tracking     | Panoptic nuScenes test | EfficientLPS + Kalman Filter | PAT | 67.1  | # 2         |
|                       |                        |                           | LSTQ   | 63.7  | # 2         |
|                       |                        |                           | PTQ    | 62.3  | # 2         |
| Panoptic Tracking     | Panoptic nuScenes val  | EfficientLPS + Kalman Filter | PAT | 64.6  | # 2         |
|                       |                        |                           | LSTQ   | 62.0  | # 2         |
|                       |                        |                           | PTQ    | 60.6  | # 2         |
| Panoptic Segmentation | Panoptic nuScenes val  | PolarSeg-Panoptic         | PQ     | 63.4  | # 1         |
|                       |                        |                           | SQ     | 83.9  | # 1         |
|                       |                        |                           | RQ     | 75.3  | # 1         |
|                       |                        |                           | mIoU   | 66.9  | # 1         |
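The leaderboard reports PQ alongside its two standard factors, SQ (segmentation quality) and RQ (recognition quality), following the widely used panoptic quality decomposition of Kirillov et al. (PQ = SQ × RQ per class; predicted and ground-truth segments are matched when their IoU exceeds 0.5). A minimal per-class sketch, with the function name and inputs chosen here for illustration only:

```python
def panoptic_quality(tp_ious, num_fp, num_fn):
    """Per-class panoptic quality, PQ = SQ * RQ.

    tp_ious: IoU values of matched (IoU > 0.5) prediction/ground-truth pairs.
    num_fp:  number of unmatched predicted segments (false positives).
    num_fn:  number of unmatched ground-truth segments (false negatives).
    Returns (pq, sq, rq).
    """
    tp = len(tp_ious)
    if tp == 0:
        return 0.0, 0.0, 0.0
    # Segmentation quality: mean IoU over the matched segments.
    sq = sum(tp_ious) / tp
    # Recognition quality: an F1-style score over segment matches.
    rq = tp / (tp + 0.5 * num_fp + 0.5 * num_fn)
    return sq * rq, sq, rq
```

The dataset-level scores above average these per-class values over all classes, which is why the reported PQ need not equal the product of the reported (class-averaged) SQ and RQ exactly.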

Methods


No methods listed for this paper.