RangeNet++: Fast and Accurate LiDAR Semantic Segmentation

Perception in autonomous vehicles is often carried out through a suite of different sensing modalities. Given the massive amount of openly available labeled RGB data and the advent of high-quality deep learning algorithms for image-based recognition, high-level semantic perception tasks are predominantly solved using high-resolution cameras. As a result, other sensor modalities that are potentially useful for this task are often ignored. In this paper, we push the state of the art in LiDAR-only semantic segmentation forward in order to provide another independent source of semantic information to the vehicle. Our approach can accurately perform full semantic segmentation of LiDAR point clouds at sensor frame rate. We use range images as an intermediate representation in combination with a Convolutional Neural Network (CNN) that exploits the rotating LiDAR sensor model. To obtain accurate results, we propose a novel post-processing algorithm that deals with problems arising from this intermediate representation, such as discretization errors and blurry CNN outputs. We implemented and thoroughly evaluated our approach, including several comparisons to the state of the art. Our experiments show that our approach outperforms state-of-the-art approaches while still running online on a single embedded GPU. The code can be accessed at https://github.com/PRBonn/lidar-bonnetal
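
The intermediate representation mentioned in the abstract is a spherical projection of each LiDAR scan onto a dense range image. Below is a minimal NumPy sketch of such a projection; the function name, the 64x2048 resolution, and the HDL-64E-style vertical field of view are illustrative assumptions, not necessarily the paper's exact training configuration.

```python
import numpy as np

def project_to_range_image(points, fov_up_deg=3.0, fov_down_deg=-25.0,
                           H=64, W=2048):
    """Spherically project an (N, >=3) LiDAR scan (x, y, z, ...) onto an
    H x W range image. FOV defaults are illustrative (HDL-64E-like)."""
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = abs(fov_up) + abs(fov_down)

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points[:, :3], axis=1)

    yaw = -np.arctan2(y, x)                        # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(depth, 1e-8))  # elevation

    # normalize angles to [0, 1] image coordinates
    u = 0.5 * (yaw / np.pi + 1.0)                   # column coordinate
    v = 1.0 - (pitch + abs(fov_down)) / fov         # row coordinate

    col = np.clip(np.floor(u * W), 0, W - 1).astype(np.int32)
    row = np.clip(np.floor(v * H), 0, H - 1).astype(np.int32)

    # write points far-to-near so the closest point wins each pixel
    order = np.argsort(depth)[::-1]
    range_image = np.full((H, W), -1.0, dtype=np.float32)
    range_image[row[order], col[order]] = depth[order]
    return range_image, row, col
```

The post-processing step re-projects the CNN's per-pixel labels back to all 3D points and cleans them with a k-nearest-neighbor vote weighted by range differences, countering the discretization errors and label bleeding mentioned above. The sketch below is a simplified, hypothetical rendering of that idea (window size, k, and the range cutoff are illustrative defaults), not the paper's GPU implementation.

```python
import numpy as np

def knn_postprocess(range_image, labels_image, point_rows, point_cols,
                    point_depth, k=5, window=5, cutoff=1.0):
    """Simplified kNN label cleanup sketch: for every 3D point, vote among
    the k nearest labels (in range) inside a window x window neighborhood
    of its range-image pixel. Assumes nonnegative integer class labels."""
    H, W = range_image.shape
    half = window // 2
    out = np.empty(len(point_rows), dtype=labels_image.dtype)
    for i, (r, c, d) in enumerate(zip(point_rows, point_cols, point_depth)):
        r0, r1 = max(0, r - half), min(H, r + half + 1)
        c0, c1 = max(0, c - half), min(W, c + half + 1)
        ranges = range_image[r0:r1, c0:c1].ravel()
        labs = labels_image[r0:r1, c0:c1].ravel()
        valid = ranges >= 0                     # skip empty pixels
        diffs = np.abs(ranges[valid] - d)
        labs = labs[valid]
        nearest = np.argsort(diffs)[:k]
        keep = nearest[diffs[nearest] < cutoff]  # cutoff rejects label bleeding
        if keep.size:
            out[i] = np.bincount(labs[keep]).argmax()  # majority vote
        else:
            out[i] = labels_image[r, c]          # fall back to own pixel
    return out
```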

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| 3D Semantic Segmentation | SemanticKITTI | RangeNet++ | test mIoU | 52.2% | #31 |
| Robust 3D Semantic Segmentation | SemanticKITTI-C | RangeNet-53 (64x2048) | mean Corruption Error (mCE) | 130.66% | #19 |
| Robust 3D Semantic Segmentation | SemanticKITTI-C | RangeNet-21 (64x2048) | mean Corruption Error (mCE) | 136.33% | #20 |
