Language-Grounded Indoor 3D Semantic Segmentation in the Wild

16 Apr 2022 · David Rozenberszki, Or Litany, Angela Dai

Recent advances in 3D semantic segmentation with deep neural networks have shown remarkable success, with rapid performance increase on available datasets. However, current 3D semantic segmentation benchmarks contain only a small number of categories -- less than 30 for ScanNet and SemanticKITTI, for instance, which are not enough to reflect the diversity of real environments (e.g., semantic image understanding covers hundreds to thousands of classes). Thus, we propose to study a larger vocabulary for 3D semantic segmentation with a new extended benchmark on ScanNet data with 200 class categories, an order of magnitude more than previously studied. This large number of class categories also induces a large natural class imbalance, both of which are challenging for existing 3D semantic segmentation methods. To learn more robust 3D features in this context, we propose a language-driven pre-training method to encourage learned 3D features that might have limited training examples to lie close to their pre-trained text embeddings. Extensive experiments show that our approach consistently outperforms state-of-the-art 3D pre-training for 3D semantic segmentation on our proposed benchmark (+9% relative mIoU), including limited-data scenarios with +25% relative mIoU using only 5% annotations.
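The language-driven pre-training described above can be sketched as a contrastive-style objective that pulls each learned 3D point feature toward the frozen text embedding of its ground-truth class name. The snippet below is a simplified illustration under stated assumptions (cosine similarity, a cross-entropy over class text embeddings, and a hypothetical temperature `tau`); the paper's exact loss formulation may differ.

```python
import numpy as np

def language_grounding_loss(point_feats, labels, text_embeds, tau=0.07):
    """Pull per-point 3D features toward their class's text embedding.

    point_feats: (N, D) learned 3D features, one per point
    labels:      (N,)   ground-truth class indices
    text_embeds: (C, D) frozen text embeddings per class (e.g. from a
                 pre-trained language model) -- illustrative assumption
    tau:         temperature scaling the cosine similarities
    """
    # L2-normalize both sides so the dot product is cosine similarity
    f = point_feats / np.linalg.norm(point_feats, axis=1, keepdims=True)
    t = text_embeds / np.linalg.norm(text_embeds, axis=1, keepdims=True)
    logits = f @ t.T / tau  # (N, C): similarity of each point to every class

    # Cross-entropy against the ground-truth class embedding,
    # computed in a numerically stable way
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

Rare categories with few annotated points still receive a well-placed target in embedding space, since the text embeddings encode semantic relations between class names regardless of how often each class appears in training data.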


Datasets


Introduced in the Paper:

ScanNet200

Used in the Paper:

ScanNet
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| 3D Semantic Segmentation | ScanNet200 | LGround | val mIoU | 28.8 | #7 |
| 3D Semantic Segmentation | ScanNet200 | LGround | test mIoU | 27.2 | #6 |
