Learning 3D Semantic Scene Graphs from 3D Indoor Reconstructions

Scene understanding has been of high interest in computer vision. It encompasses not only identifying objects in a scene, but also their relationships within the given context. With this goal, a recent line of work tackles 3D semantic segmentation and scene layout prediction. In our work we focus on scene graphs, a data structure that organizes the entities of a scene in a graph, where objects are nodes and their relationships are modeled as edges. We leverage inference on scene graphs as a way to carry out 3D scene understanding, mapping objects and their relationships. In particular, we propose a learned method that regresses a scene graph from the point cloud of a scene. Our novel architecture is based on PointNet and Graph Convolutional Networks (GCN). In addition, we introduce 3DSSG, a semi-automatically generated dataset that contains semantically rich scene graphs of 3D scenes. We show the application of our method in a domain-agnostic retrieval task, where graphs serve as an intermediate representation for 3D-3D and 2D-3D matching.
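
The abstract names PointNet and GCN as the building blocks of the architecture. Below is a minimal sketch of how such a pipeline could be wired, assuming an SGPN-style design in which each object's point set is encoded by a shared PointNet, pairs of node features initialize edge features, and a few rounds of graph convolution feed object- and predicate-classification heads. All class names, dimensions, and heads here are illustrative assumptions, not the authors' released code.

```python
# Illustrative PointNet + GCN scene-graph predictor (an assumed SGPN-like
# design, not the paper's exact implementation).
import torch
import torch.nn as nn


class PointNetEncoder(nn.Module):
    """Shared point-wise MLP followed by max-pooling (vanilla PointNet)."""
    def __init__(self, out_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, out_dim, 1),
        )

    def forward(self, pts):                       # pts: (N_obj, 3, N_pts)
        return self.mlp(pts).max(dim=2).values    # (N_obj, out_dim)


class GCNLayer(nn.Module):
    """One round of message passing over node and edge features."""
    def __init__(self, dim=256):
        super().__init__()
        self.edge_fn = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU())
        self.node_fn = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, nodes, edges, edge_index):
        src, dst = edge_index                     # source / target node ids
        edges = self.edge_fn(
            torch.cat([nodes[src], edges, nodes[dst]], dim=1))
        agg = torch.zeros_like(nodes)
        agg.index_add_(0, dst, edges)             # sum incoming edge messages
        nodes = self.node_fn(torch.cat([nodes, agg], dim=1))
        return nodes, edges


class SceneGraphNet(nn.Module):
    def __init__(self, n_obj_classes, n_rel_classes, dim=256, layers=2):
        super().__init__()
        self.encoder = PointNetEncoder(dim)
        self.edge_init = nn.Linear(2 * dim, dim)
        self.gcn = nn.ModuleList(GCNLayer(dim) for _ in range(layers))
        self.obj_head = nn.Linear(dim, n_obj_classes)   # node labels
        self.rel_head = nn.Linear(dim, n_rel_classes)   # edge predicates

    def forward(self, obj_points, edge_index):
        nodes = self.encoder(obj_points)          # per-object features
        src, dst = edge_index
        edges = self.edge_init(torch.cat([nodes[src], nodes[dst]], dim=1))
        for layer in self.gcn:
            nodes, edges = layer(nodes, edges, edge_index)
        return self.obj_head(nodes), self.rel_head(edges)


# Illustrative usage (class counts are placeholders, not the 3DSSG label sets):
net = SceneGraphNet(n_obj_classes=160, n_rel_classes=27)
obj_logits, rel_logits = net(torch.randn(4, 3, 1024),          # 4 objects
                             torch.tensor([[0, 1, 2],          # 3 directed
                                           [1, 2, 3]]))        # candidate edges
```

The design choice this sketches is the one the abstract implies: geometry is summarized per object by PointNet, while the GCN propagates context between objects so that predicate predictions can depend on both endpoints and their neighborhood.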


Datasets


Introduced in the Paper:

3DSSG

Used in the Paper:

3RScan

Results from the Paper


Task                        Dataset   Model   Metric     Value   Global Rank
3D scene graph generation   3DSSG     SGPN    Acc@50     87.55   #4
                                              Acc@100    90.66   #3
                                              mAcc@50    41.52   #4
                                              mAcc@100   51.92   #4
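
The Acc@k and mAcc@k numbers are top-k accuracies. A plausible reading, sketched below under assumption, is that a ground-truth (subject, predicate, object) triplet counts as correct when it appears among the k highest-scoring predicted triplets, with mAcc@k averaging the per-class scores; the function is illustrative, not the benchmark's official evaluator.

```python
# Hypothetical Acc@k sketch: a ground-truth triplet is a hit if it appears
# among the k highest-scoring predictions. This mirrors a common top-k
# recall protocol and is an assumption, not the official evaluation code.
def acc_at_k(pred_triplets, scores, gt_triplets, k):
    ranked = [t for _, t in sorted(zip(scores, pred_triplets),
                                   key=lambda p: p[0], reverse=True)]
    top_k = set(ranked[:k])
    hits = sum(1 for t in gt_triplets if t in top_k)
    return hits / max(len(gt_triplets), 1)


# Toy example: one of the two ground-truth triplets is ranked in the top 2.
preds = [("chair", "standing on", "floor"),
         ("chair", "attached to", "wall"),
         ("lamp", "standing on", "table")]
scores = [0.9, 0.7, 0.2]
gt = [("chair", "standing on", "floor"), ("lamp", "standing on", "table")]
print(acc_at_k(preds, scores, gt, k=2))  # 0.5
```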

Methods

PointNet, Graph Convolutional Networks (GCN)