3D AffordanceNet: A Benchmark for Visual Object Affordance Understanding

CVPR 2021 · Shengheng Deng, Xun Xu, Chaozheng Wu, Ke Chen, Kui Jia

The ability to understand the ways to interact with objects from visual cues, a.k.a. visual affordance, is essential to vision-guided robotic research. This involves categorizing, segmenting, and reasoning about visual affordances. Relevant studies have previously been conducted in the 2D and 2.5D image domains; however, a truly functional understanding of object affordance requires learning and prediction in the 3D physical domain, which is still absent in the community. In this work, we present the 3D AffordanceNet dataset, a benchmark of 23k shapes from 23 semantic object categories, annotated with 18 visual affordance categories. Based on this dataset, we provide three benchmarking tasks for evaluating visual affordance understanding: full-shape, partial-view, and rotation-invariant affordance estimation. Three state-of-the-art point cloud deep learning networks are evaluated on all tasks. In addition, we investigate a semi-supervised learning setup to explore the possibility of benefiting from unlabeled data. Comprehensive results on our contributed dataset show the promise of visual affordance understanding as a valuable yet challenging benchmark.
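
The rotation benchmarks reported below ("Rotate z" and "Rotate SO(3)") evaluate robustness to rotations of the input point cloud about the vertical axis or over the full rotation group. As an illustration only, the following is a minimal sketch of how such rotations could be generated, assuming each shape is an (N, 3) NumPy array; the function names are hypothetical and not taken from the paper's released code.

```python
import numpy as np

def random_z_rotation(points: np.ndarray) -> np.ndarray:
    """Rotate an (N, 3) point cloud by a random angle about the z axis."""
    theta = np.random.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T

def random_so3_rotation(points: np.ndarray) -> np.ndarray:
    """Rotate an (N, 3) point cloud by a random SO(3) rotation (QR trick)."""
    # QR decomposition of a Gaussian matrix yields an orthogonal matrix;
    # rescale column signs and fix the determinant to +1 to get a rotation.
    q, r = np.linalg.qr(np.random.randn(3, 3))
    q = q * np.sign(np.diag(r))
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]
    return points @ q.T
```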


Datasets


Introduced in the Paper:

3D AffordanceNet

Used in the Paper:

CAD-120

Results

Task                  Dataset                        Model  mAP    AIOU   Global Rank
Affordance Detection  3D AffordanceNet               DGCNN  0.464  0.178  #1
Affordance Detection  3D AffordanceNet Partial View  DGCNN  0.422  0.138  #1
Affordance Detection  3D AffordanceNet Rotate SO(3)  DGCNN  0.373  0.128  #1
Affordance Detection  3D AffordanceNet Rotate z      DGCNN  0.448  0.161  #1
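
For reference, the sketch below shows one plausible way to compute per-shape mAP and AIOU from point-wise affordance scores: average precision per affordance class, and IoU averaged over a sweep of score thresholds. This is an assumption about the metric definitions rather than the benchmark's official evaluation code, and the ground-truth affordance labels are assumed to be binarized.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def evaluate_affordance(scores: np.ndarray, labels: np.ndarray,
                        thresholds=np.linspace(0.0, 0.99, 100)):
    """Illustrative per-shape mAP and AIOU for point-wise affordance prediction.

    scores: (N_points, N_affordances) predicted scores in [0, 1]
    labels: (N_points, N_affordances) binary ground-truth labels (assumed binarized)
    """
    aps, ious = [], []
    for a in range(scores.shape[1]):
        s, y = scores[:, a], labels[:, a]
        if y.sum() == 0:
            # skip affordance classes absent from this shape
            continue
        # average precision over points for this affordance class
        aps.append(average_precision_score(y, s))
        # AIOU: IoU of the thresholded prediction, averaged over a threshold sweep
        iou_per_t = []
        for t in thresholds:
            pred = s >= t
            union = np.logical_or(pred, y).sum()
            inter = np.logical_and(pred, y).sum()
            iou_per_t.append(inter / union if union > 0 else 0.0)
        ious.append(np.mean(iou_per_t))
    return float(np.mean(aps)), float(np.mean(ious))
```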

Methods


No methods listed for this paper.