What Can be Seen is What You Get: Structure Aware Point Cloud Augmentation

20 Jun 2022 · Frederik Hasecke, Martin Alsfasser, Anton Kummert

To train a well-performing neural network for semantic segmentation, a large dataset with available ground truth is crucial for the network to generalize to unseen data. In this paper we present novel point cloud augmentation methods to artificially diversify a dataset. Our sensor-centric methods keep the data structure consistent with the capabilities of the lidar sensor. These methods allow us to enrich low-value data with high-value instances, as well as to create entirely new scenes. We validate our methods on multiple neural networks with the public SemanticKITTI dataset and demonstrate that all networks improve over their respective baselines. In addition, we show that our methods enable the use of very small datasets, saving annotation time, training time and the associated costs.
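The core idea of sensor-centric augmentation is that an inserted object must obey the same visibility rules as the real lidar returns: a point is only valid if nothing closer occupies the same sensor ray. The sketch below is a minimal, hypothetical illustration of that principle (not the authors' implementation): both the scene and the instance are projected into a spherical range image, and in each pixel only the closest return survives. Sensor parameters (`h`, `w`, the vertical field of view) are illustrative assumptions.

```python
import numpy as np

def to_range_image(points, h=32, w=360, fov_up=15.0, fov_down=-15.0):
    """Project xyz points to (row, col, range) in a spherical range image.
    fov_up/fov_down are illustrative sensor parameters, in degrees."""
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(points[:, 1], points[:, 0])            # azimuth
    pitch = np.arcsin(points[:, 2] / np.maximum(r, 1e-9))   # elevation
    fu, fd = np.radians(fov_up), np.radians(fov_down)
    col = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w
    row = np.clip(((fu - pitch) / (fu - fd) * h).astype(int), 0, h - 1)
    return row, col, r

def insert_instance(scene, instance, h=32, w=360):
    """Occlusion-aware insertion: per range-image pixel, keep only the
    closest return, so the merged cloud stays lidar-consistent."""
    srow, scol, srange = to_range_image(scene, h, w)
    irow, icol, irange = to_range_image(instance, h, w)

    # closest range per pixel for each cloud (ufunc.at handles duplicates)
    scene_depth = np.full((h, w), np.inf)
    np.minimum.at(scene_depth, (srow, scol), srange)
    inst_depth = np.full((h, w), np.inf)
    np.minimum.at(inst_depth, (irow, icol), irange)

    # scene points survive unless the instance is in front of them;
    # instance points survive if they are the closest return overall
    keep_scene = srange < inst_depth[srow, scol]
    keep_inst = (irange <= inst_depth[irow, icol]) & \
                (irange < scene_depth[irow, icol])
    return np.vstack([scene[keep_scene], instance[keep_inst]])
```

For example, inserting a point at 5 m along a ray that already holds a scene return at 10 m removes the occluded scene point and keeps the instance point, mimicking what the sensor would actually have seen.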


Results

Task: Semi-Supervised Semantic Segmentation
Dataset: SemanticKITTI
Model: SAPCA (Cylinder3D)

Metric              Value   Global Rank
mIoU (1% Labels)    50.9    #2
mIoU (10% Labels)   64.0    #1
mIoU (50% Labels)   64.9    #2
