Normalized Object Coordinate Space for Category-Level 6D Object Pose and Size Estimation

The goal of this paper is to estimate the 6D pose and dimensions of unseen object instances in an RGB-D image. Contrary to "instance-level" 6D pose estimation tasks, our problem assumes that no exact object CAD models are available during either training or testing time. To handle different and unseen object instances in a given category, we introduce a Normalized Object Coordinate Space (NOCS)---a shared canonical representation for all possible object instances within a category. Our region-based neural network is then trained to directly infer the correspondence from observed pixels to this shared object representation (NOCS) along with other object information such as class label and instance mask. These predictions can be combined with the depth map to jointly estimate the metric 6D pose and dimensions of multiple objects in a cluttered scene. To train our network, we present a new context-aware technique to generate large amounts of fully annotated mixed reality data. To further improve our model and evaluate its performance on real data, we also provide a fully annotated real-world dataset with large environment and instance variation. Extensive experiments demonstrate that the proposed method is able to robustly estimate the pose and size of unseen object instances in real environments while also achieving state-of-the-art performance on standard 6D pose estimation benchmarks.
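The inference step described above — aligning the per-pixel NOCS predictions with the depth map to recover rotation, translation, and metric scale — amounts to fitting a similarity transform between the predicted canonical coordinates and the depth-backprojected point cloud. Below is a minimal sketch of that alignment using the Umeyama least-squares method; the function names are illustrative, and the sketch omits the outlier handling (e.g. RANSAC) that a practical implementation of this step would add.

```python
import numpy as np

def umeyama_similarity(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    such that dst ~= s * R @ src + t, for point sets src, dst of shape (N, 3)."""
    mu_src, mu_dst = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)            # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                            # avoid reflections
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src      # isotropic scale
    t = mu_dst - s * R @ mu_src
    return s, R, t

def backproject(depth, K, mask):
    """Lift masked depth pixels (meters) to camera-frame 3D points
    with a pinhole intrinsic matrix K."""
    v, u = np.nonzero(mask)
    z = depth[v, u]
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)

# nocs_map: (H, W, 3) predicted NOCS coordinates; mask: (H, W) instance mask;
# depth: (H, W) depth in meters. The recovered R, t give the 6D pose and the
# scale s relates normalized object size to metric size.
# s, R, t = umeyama_similarity(nocs_map[mask], backproject(depth, K, mask))
```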

CVPR 2019

Datasets


Introduced in the Paper:

REAL275

Used in the Paper:

MS COCO, ShapeNet, SUN RGB-D, ShapeNetCore

Results from the Paper

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| 6D Pose Estimation using RGBD | CAMERA25 | NOCS (128 bins) | mAP 10°, 10 cm | 62.2 | #2 |
| 6D Pose Estimation using RGBD | CAMERA25 | NOCS (128 bins) | mAP 10°, 5 cm | 61.7 | #2 |
| 6D Pose Estimation using RGBD | CAMERA25 | NOCS (128 bins) | mAP 3D IoU@25 | 91.4 | #2 |
| 6D Pose Estimation using RGBD | CAMERA25 | NOCS (128 bins) | mAP 3D IoU@50 | 85.3 | #2 |
| 6D Pose Estimation using RGBD | CAMERA25 | NOCS (128 bins) | mAP 5°, 5 cm | 38.8 | #2 |
| 6D Pose Estimation using RGBD | REAL275 | NOCS (128 bins) | mAP 10°, 10 cm | 26.7 | #4 |
| 6D Pose Estimation using RGBD | REAL275 | NOCS (128 bins) | mAP 10°, 5 cm | 26.7 | #9 |
| 6D Pose Estimation using RGBD | REAL275 | NOCS (128 bins) | mAP 3D IoU@25 | 84.9 | #4 |
| 6D Pose Estimation using RGBD | REAL275 | NOCS (128 bins) | mAP 3D IoU@50 | 80.5 | #4 |
| 6D Pose Estimation using RGBD | REAL275 | NOCS (128 bins) | mAP 5°, 5 cm | 9.5 | #11 |
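The pose metrics above count a prediction as correct when both the rotation error and the translation error fall below the stated thresholds (e.g. 10° and 10 cm), while the 3D IoU metrics threshold the overlap between predicted and ground-truth 3D bounding boxes. The sketch below illustrates the rotation/translation check; the function name is illustrative, and it omits the special handling of symmetric categories (e.g. bottles, bowls, cans), where rotation about the symmetry axis is ignored.

```python
import numpy as np

def pose_within_threshold(R_pred, t_pred, R_gt, t_gt,
                          deg_thresh=10.0, cm_thresh=10.0):
    """Check a predicted pose against a (degrees, centimeters) threshold pair.

    R_*: (3, 3) rotation matrices; t_*: (3,) translations in meters.
    Rotation error is the geodesic angle of the relative rotation.
    """
    cos_angle = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    rot_err_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    trans_err_cm = 100.0 * np.linalg.norm(t_pred - t_gt)
    return rot_err_deg < deg_thresh and trans_err_cm < cm_thresh
```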

Methods


No methods listed for this paper.