We introduce BuildingNet: (a) a large-scale dataset of 3D building models whose exteriors are consistently labeled, and (b) a graph neural network that labels building meshes by analyzing spatial and structural relations of their geometric primitives. To create our dataset, we used crowdsourcing combined with expert guidance, resulting in 513K annotated mesh primitives, grouped into 292K semantic part components across 2K building models. The dataset covers several building categories, such as houses, churches, skyscrapers, town halls, libraries, and castles. We include a benchmark for evaluating mesh and point cloud labeling. Buildings have more challenging structural complexity than objects in existing benchmarks (e.g., ShapeNet, PartNet); thus, we hope that our dataset can nurture the development of algorithms able to cope with such large-scale geometric data for both vision and graphics tasks, e.g., 3D semantic segmentation, part-based generative models, correspondences, texturing, and analysis of point cloud data acquired from real-world buildings. Finally, we show that our mesh-based graph neural network significantly improves performance over several baselines for labeling 3D meshes.
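The network described above operates on a graph whose nodes are the mesh's geometric primitives and whose edges encode spatial relations between them. As a minimal sketch of how such a graph could be assembled, the snippet below connects primitives whose centroids fall within a radius and attaches relative offsets as edge features; `build_primitive_graph` and the radius criterion are illustrative assumptions, not the paper's actual graph construction.

```python
import numpy as np

def build_primitive_graph(centroids, radius):
    """Build a directed graph over mesh primitives.

    centroids: (N, 3) array of per-primitive centroid positions.
    radius: connect primitives whose centroids are within this distance.
    Returns (edges, edge_feats): a list of (i, j) index pairs and an
    array of relative-offset edge features, a crude stand-in for the
    spatial relations a GNN could consume.
    """
    n = len(centroids)
    edges, feats = [], []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if np.linalg.norm(centroids[i] - centroids[j]) <= radius:
                edges.append((i, j))
                feats.append(centroids[j] - centroids[i])
    return edges, np.array(feats)
```

In practice the edge features would also capture structural relations (containment, support, similarity), but a distance-based adjacency is enough to show the node/edge layout a message-passing network expects.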

PDF Abstract (ICCV 2021)

Datasets


Introduced in the Paper:

BuildingNet

Used in the Paper:

ShapeNet
Task: 3D Building Mesh Labeling — Dataset: BuildingNet-Mesh

| Model                   | Part IoU | Shape IoU | Class Accuracy | Global Rank |
|-------------------------|----------|-----------|----------------|-------------|
| BuildingGNN-MinkNet     | 42.6     | 46.8      | 77.8           | #1          |
| BuildingGNN-PointNet++  | 31.5     | 35.9      | 73.9           | #2          |
