Large-scale Point Cloud Semantic Segmentation with Superpoint Graphs

CVPR 2018 · Loïc Landrieu, Martin Simonovsky

We propose a novel deep learning-based framework to tackle the challenge of semantic segmentation of large-scale point clouds of millions of points. We argue that the organization of 3D point clouds can be efficiently captured by a structure called superpoint graph (SPG), derived from a partition of the scanned scene into geometrically homogeneous elements. SPGs offer a compact yet rich representation of contextual relationships between object parts, which is then exploited by a graph convolutional network. Our framework sets a new state of the art for segmenting outdoor LiDAR scans (+11.9 and +8.8 mIoU points for both Semantic3D test sets), as well as indoor scans (+12.4 mIoU points for the S3DIS dataset).
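The abstract outlines a three-stage pipeline: partition the cloud into geometrically homogeneous superpoints, connect them into a superpoint graph (SPG), then classify superpoints with a graph convolutional network. The sketch below is a simplified, hypothetical illustration of the first two stages only, not the authors' implementation: it substitutes DBSCAN clustering for the paper's global-energy (cut pursuit) partition, a nearest-centroid graph for the Voronoi adjacency graph, and hand-made centroid/extent features for the PointNet superpoint embeddings, and it omits the graph network entirely. The function name and parameters are invented for this example.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN


def build_superpoint_graph(points, cluster_eps=0.5, min_points=10, adjacency_k=2):
    """points: (N, 3) array. Returns per-superpoint features and an edge list."""
    # Stand-in partition: density clustering instead of the paper's
    # global-energy (cut pursuit) partition into homogeneous segments.
    labels = DBSCAN(eps=cluster_eps, min_samples=min_points).fit_predict(points)
    keep = labels >= 0                        # drop points DBSCAN marks as noise
    points, labels = points[keep], labels[keep]

    sp_ids = np.unique(labels)
    if sp_ids.size == 0:
        raise ValueError("no superpoints found; relax cluster_eps/min_points")

    # Summarize each superpoint by its centroid and extent (std per axis);
    # the paper instead embeds each superpoint with a small PointNet.
    feats = np.stack([
        np.concatenate([points[labels == s].mean(axis=0),
                        points[labels == s].std(axis=0)])
        for s in sp_ids
    ])

    # Superpoint adjacency: connect each superpoint to its nearest centroids,
    # a crude stand-in for the Voronoi adjacency graph used in the paper.
    if sp_ids.size < 2:
        return feats, np.empty((0, 2), dtype=int)
    k = min(adjacency_k + 1, sp_ids.size)
    _, nn = cKDTree(feats[:, :3]).query(feats[:, :3], k=k)
    edges = {(min(i, j), max(i, j))
             for i, row in enumerate(nn) for j in row[1:] if i != j}
    return feats, np.array(sorted(edges))


if __name__ == "__main__":
    # Toy scene: three well-separated blobs standing in for homogeneous segments.
    rng = np.random.default_rng(0)
    centres = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0], [0.0, 4.0, 0.0]])
    pts = np.concatenate([rng.normal(c, 0.3, size=(500, 3)) for c in centres])
    feats, edges = build_superpoint_graph(pts)
    print("superpoint features:", feats.shape, "edges:", edges.shape)
```

On the toy scene this yields three superpoints (one per blob), six features each, and a fully connected adjacency graph; in the paper, the resulting SPG nodes and edges are what the graph convolutional network consumes.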

Results

Task                      Dataset        Model    Metric            Value   Global Rank
3D Semantic Segmentation  DALES          SPG      mIoU              60.6    #6
3D Semantic Segmentation  DALES          SPG      Overall Accuracy  95.5    #7
3D Semantic Segmentation  DALES          SPG      Model size        280K    #1
Semantic Segmentation     S3DIS          SPG      Mean IoU          62.1    #40
Semantic Segmentation     S3DIS          SPG      mAcc              73      #23
Semantic Segmentation     S3DIS          SPG      oAcc              85.5    #30
Semantic Segmentation     S3DIS          SPG      Number of params  0.290M  #37
Semantic Segmentation     S3DIS          SPG      Params (M)        0.29    #15
Semantic Segmentation     S3DIS Area5    SPG      mIoU              58.04   #43
Semantic Segmentation     S3DIS Area5    SPG      oAcc              86.38   #29
Semantic Segmentation     S3DIS Area5    SPG      mAcc              66.5    #32
Semantic Segmentation     S3DIS Area5    SPG      Number of params  280K    #2
Semantic Segmentation     Semantic3D     SPG      mIoU              76.2%   #5
Semantic Segmentation     Semantic3D     SPG      oAcc              92.9%   #5
Semantic Segmentation     Semantic3D     SPG      mIoU              73.2%   #8
3D Semantic Segmentation  SemanticKITTI  SPGraph  test mIoU         17.4%   #38
3D Semantic Segmentation  SensatUrban    SPGraph  mIoU              37.29   #6

Methods

No methods listed for this paper.