OccuSeg: Occupancy-aware 3D Instance Segmentation

CVPR 2020 · Lei Han, Tian Zheng, Lan Xu, Lu Fang

3D instance segmentation, with a variety of applications in robotics and augmented reality, is in high demand. Unlike 2D images, which are projective observations of the environment, 3D models provide a metric reconstruction of the scene without occlusion or scale ambiguity. In this paper, we define the "3D occupancy size" as the number of voxels occupied by each instance. Because it can be predicted robustly, we build on it to propose OccuSeg, an occupancy-aware 3D instance segmentation scheme. Our multi-task learning produces both an occupancy signal and embedding representations, where the training of spatial and feature embeddings differs according to their scale awareness. Our clustering scheme benefits from a reliable comparison between the predicted occupancy size and the clustered occupancy size, which encourages hard samples to be clustered correctly and avoids over-segmentation. The proposed approach achieves state-of-the-art performance on three real-world datasets, i.e., ScanNetV2, S3DIS, and SceneNN, while maintaining high efficiency.
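The occupancy-guided clustering idea can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the embedding-distance test, and the tolerance rule below are simplified assumptions. The key point it demonstrates is that two candidate clusters merge only when the merged voxel count stays consistent with the mean predicted occupancy size, which discourages both over-segmentation and spurious merges.

```python
import numpy as np

def occupancy_guided_clustering(embeddings, pred_occupancy, adjacency,
                                embed_thresh=0.5, occ_tolerance=0.3):
    """Simplified, hypothetical sketch of occupancy-aware clustering.

    embeddings:     (N, D) per-voxel feature embeddings
    pred_occupancy: (N,)   per-voxel predicted instance occupancy size
    adjacency:      list of (i, j) candidate voxel pairs to merge
    """
    clusters = {i: [i] for i in range(len(embeddings))}
    owner = list(range(len(embeddings)))

    for i, j in adjacency:
        ci, cj = owner[i], owner[j]
        if ci == cj:
            continue
        # Feature-space test: only merge voxels with similar embeddings.
        if np.linalg.norm(embeddings[i] - embeddings[j]) > embed_thresh:
            continue
        merged = clusters[ci] + clusters[cj]
        # Occupancy test: the merged cluster's voxel count must not exceed
        # the mean predicted occupancy size by more than the tolerance.
        expected = pred_occupancy[merged].mean()
        if len(merged) <= expected * (1 + occ_tolerance):
            clusters[ci] = merged
            for v in clusters.pop(cj):
                owner[v] = ci
    return [sorted(c) for c in clusters.values()]
```

For example, four voxels whose predicted occupancy is 2 each, forming two well-separated pairs in embedding space, are grouped into two instances of two voxels apiece; the occupancy test then rejects any further merge across the pairs.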


Results from the Paper

| Task                     | Dataset     | Model   | Metric  | Value | Global Rank |
|--------------------------|-------------|---------|---------|-------|-------------|
| 3D Instance Segmentation | ScanNet(v2) | OccuSeg | mAP     | 44.3  | #12         |
| 3D Instance Segmentation | ScanNet(v2) | OccuSeg | mAP@50  | 63.4  | #17         |
| 3D Instance Segmentation | SceneNN     | OccuSeg | mAP@0.5 | 47.1  | #1          |