Incorporating Human Domain Knowledge in 3D LiDAR-based Semantic Segmentation

23 May 2019 · Jilin Mei, Huijing Zhao

This work studies semantic segmentation using 3D LiDAR data. Popular deep learning methods for this task require a large number of manual annotations to train their parameters. We propose a new method that combines the strengths of traditional rule-based methods and deep learning by incorporating human domain knowledge into the neural network, reducing the demand for manual annotations and improving training efficiency. We first pretrain a model with samples auto-generated by a rule-based classifier, so that human knowledge is propagated into the network. Starting from this pretrained model, only a small set of annotations is required for further fine-tuning. Quantitative experiments show that the pretrained model outperforms random initialization in almost all cases; furthermore, our method achieves similar performance with fewer manual annotations.
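The two-stage training scheme described above can be illustrated with a minimal sketch. The network, feature dimensions, class count, and random tensors standing in for LiDAR data are all hypothetical placeholders, not the paper's actual architecture or datasets; only the overall flow (pretrain on rule-based pseudo-labels, then fine-tune on a small set of human annotations) follows the abstract.

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical point-wise segmentation network; a small MLP stands in
# for whatever architecture the paper actually uses.
class PointSegNet(nn.Module):
    def __init__(self, in_dim=4, num_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, points):  # points: (N, in_dim) per-point LiDAR features
        return self.net(points)

def train(model, points, labels, epochs, lr=1e-3):
    """Generic cross-entropy training loop shared by both stages."""
    opt = optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(points), labels)
        loss.backward()
        opt.step()
    return model

# Stage 1: pretrain on pseudo-labels produced by a rule-based classifier
# (e.g. geometric ground/obstacle heuristics). Random tensors stand in
# for the auto-labeled point cloud here.
model = PointSegNet()
auto_points = torch.randn(10000, 4)           # many auto-labeled points
auto_labels = torch.randint(0, 5, (10000,))   # rule-based pseudo-labels
model = train(model, auto_points, auto_labels, epochs=20)

# Stage 2: fine-tune on a much smaller set of human annotations.
human_points = torch.randn(500, 4)            # few manually labeled points
human_labels = torch.randint(0, 5, (500,))
model = train(model, human_points, human_labels, epochs=5, lr=1e-4)
```

The key design point is that the same network and loss are used in both stages; only the label source changes, so the pretrained weights simply serve as a knowledge-informed initialization for the fine-tuning stage.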

