Rel3D: A Minimally Contrastive Benchmark for Grounding Spatial Relations in 3D

NeurIPS 2020 · Ankit Goyal, Kaiyu Yang, Dawei Yang, Jia Deng

Understanding spatial relations (e.g., "laptop on table") in visual input is important for both humans and robots. Existing datasets are insufficient as they lack large-scale, high-quality 3D ground truth information, which is critical for learning spatial relations. In this paper, we fill this gap by constructing Rel3D: the first large-scale, human-annotated dataset for grounding spatial relations in 3D. Rel3D enables quantifying the effectiveness of 3D information in predicting spatial relations on large-scale human data. Moreover, we propose minimally contrastive data collection -- a novel crowdsourcing method for reducing dataset bias. The 3D scenes in our dataset come in minimally contrastive pairs: two scenes in a pair are almost identical, but a spatial relation holds in one and fails in the other. We empirically validate that minimally contrastive examples can diagnose issues with current relation detection models as well as lead to sample-efficient training. Code and data are available at https://github.com/princeton-vl/Rel3D.
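To make the idea of a minimally contrastive pair concrete, below is a minimal sketch (not the official Rel3D schema or loader) of how such a pair could be represented: two nearly identical 3D scenes that differ only in whether the stated spatial relation holds. The class and field names here are hypothetical.

```python
# Hypothetical data structures illustrating a minimally contrastive pair.
from dataclasses import dataclass
from typing import Tuple


@dataclass
class Object3D:
    category: str                          # e.g., "laptop"
    position: Tuple[float, float, float]   # object center in scene coordinates
    size: Tuple[float, float, float]       # bounding-box extents (w, h, d)
    rotation: Tuple[float, float, float]   # Euler angles (assumed convention)


@dataclass
class SpatialRelationSample:
    subject: Object3D    # e.g., the laptop
    obj: Object3D        # e.g., the table
    relation: str        # e.g., "on"
    label: bool          # True if the relation holds in this scene


@dataclass
class MinimallyContrastivePair:
    positive: SpatialRelationSample   # scene where the relation holds
    negative: SpatialRelationSample   # nearly identical scene where it fails

    def is_valid(self) -> bool:
        # The two scenes share the same relation but carry opposite labels.
        return (
            self.positive.relation == self.negative.relation
            and self.positive.label
            and not self.negative.label
        )
```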


Datasets


Introduced in the Paper:

Rel3D

Results from the Paper


Task                         | Dataset | Model                | Metric | Value | Global Rank
Spatial Relation Recognition | Rel3D   | Human                | Acc    | 94.25 | # 1
Spatial Relation Recognition | Rel3D   | MLP-Aligned Features | Acc    | 85.03 | # 2
Spatial Relation Recognition | Rel3D   | MLP-Raw Features     | Acc    | 81.24 | # 3
Spatial Relation Recognition | Rel3D   | BBox Only            | Acc    | 74.14 | # 4
Spatial Relation Recognition | Rel3D   | PPR-FCN              | Acc    | 73.3  | # 5
Spatial Relation Recognition | Rel3D   | DRNet                | Acc    | 73.25 | # 6
Spatial Relation Recognition | Rel3D   | VipCNN               | Acc    | 72.32 | # 7
Spatial Relation Recognition | Rel3D   | VTransE              | Acc    | 72.27 | # 8
Spatial Relation Recognition | Rel3D   | Random               | Acc    | 50    | # 9
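For intuition on the "BBox Only" row above, here is a hedged sketch of a bounding-box-only baseline: an MLP that predicts whether a relation holds from the two 2D boxes and a relation embedding. The architecture, feature choice, and relation count below are assumptions for illustration, not the paper's exact implementation.

```python
# Assumed sketch of a "BBox Only"-style baseline (details hypothetical).
import torch
import torch.nn as nn


class BBoxOnlyBaseline(nn.Module):
    def __init__(self, num_relations: int, hidden: int = 128):
        super().__init__()
        self.rel_embed = nn.Embedding(num_relations, 32)
        # 4 coords per box (x1, y1, x2, y2), normalized to [0, 1], for two boxes.
        self.mlp = nn.Sequential(
            nn.Linear(8 + 32, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),   # logit for "relation holds"
        )

    def forward(self, subj_box, obj_box, relation_id):
        # subj_box, obj_box: (B, 4) normalized box coordinates
        # relation_id: (B,) integer index of the relation (e.g., "on")
        x = torch.cat([subj_box, obj_box, self.rel_embed(relation_id)], dim=-1)
        return self.mlp(x).squeeze(-1)


# Usage sketch: thresholding the logit at 0 gives a binary prediction,
# which can be scored against labels to produce an accuracy like "Acc" above.
model = BBoxOnlyBaseline(num_relations=30)  # relation count is hypothetical
logits = model(torch.rand(4, 4), torch.rand(4, 4), torch.randint(0, 30, (4,)))
preds = logits > 0
```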

Methods


No methods listed for this paper.