DeepIM: Deep Iterative Matching for 6D Pose Estimation

ECCV 2018  ·  Yi Li, Gu Wang, Xiangyang Ji, Yu Xiang, Dieter Fox

Estimating the 6D pose of objects from images is an important problem in applications such as robot manipulation and virtual reality. While direct regression from images to object poses has limited accuracy, matching rendered images of an object against the observed image can produce accurate results. In this work, we propose DeepIM, a novel deep neural network for 6D pose matching. Given an initial pose estimate, our network iteratively refines the pose by matching the rendered image against the observed image. The network is trained to predict a relative pose transformation using an untangled representation of 3D location and 3D orientation, together with an iterative training process. Experiments on two commonly used benchmarks for 6D pose estimation demonstrate that DeepIM achieves large improvements over state-of-the-art methods. We furthermore show that DeepIM is able to match previously unseen objects.
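The render-match-update loop described in the abstract can be sketched as follows. This is an illustrative toy under stated assumptions, not the paper's implementation: `render` and `predict_delta` are hypothetical stand-ins for the renderer and the trained matching network, and the demo replaces both with analytic functions so the loop runs end to end.

```python
import numpy as np

def rotz(a):
    """Rotation matrix about the z-axis (keeps the toy example inside SO(3))."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def refine_pose(R, t, observed, render, predict_delta, iters=8):
    """Iteratively refine a 6D pose (R, t) in the spirit of DeepIM:
    render the current estimate, predict a relative transform against
    the observed image, apply it, and repeat."""
    for _ in range(iters):
        rendered = render(R, t)
        dR, dt = predict_delta(rendered, observed)
        # Untangled update: 3D orientation and 3D location are
        # represented and applied separately.
        R = dR @ R
        t = t + dt
    return R, t

# --- Toy demo with hypothetical stand-ins (not the real network/renderer) ---
R_true, t_true = rotz(0.0), np.array([0.1, -0.2, 0.8])

def render(R, t):
    # A real renderer would return an image of the object at pose (R, t);
    # here the pose is passed through directly so the demo is self-contained.
    return (R, t)

def predict_delta(rendered, observed):
    # Stand-in for the trained network: step halfway toward the target pose.
    (R_c, t_c), (R_g, t_g) = rendered, observed
    residual = R_g @ R_c.T
    angle = np.arctan2(residual[1, 0], residual[0, 0])
    return rotz(0.5 * angle), 0.5 * (t_g - t_c)

R_final, t_final = refine_pose(rotz(0.6), np.zeros(3), (R_true, t_true),
                               render, predict_delta)
```

Because each iteration removes a fixed fraction of the remaining pose error, the estimate converges geometrically toward the target, which mirrors why iterating the matching network improves over a single regression pass.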

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| 6D Pose Estimation using RGB | LineMOD | PoseCNN + DeepIM | Accuracy | 97.5 | #5 |
| 6D Pose Estimation using RGB | LineMOD | PoseCNN + DeepIM | Accuracy (ADD) | 88.1% | #9 |
| 6D Pose Estimation using RGB | LineMOD | PoseCNN + DeepIM | Mean ADD | 88.6 | #10 |
| 6D Pose Estimation using RGB | Occlusion LineMOD | DeepIM (trained on Occlusion LineMOD) | Mean ADD | 55.5 | #4 |
| 6D Pose Estimation using RGB | YCB-Video | PoseCNN + DeepIM | Mean ADD | 70.1% | #1 |
| 6D Pose Estimation using RGB | YCB-Video | PoseCNN + DeepIM | Mean ADI | 84.2 | #1 |
| 6D Pose Estimation using RGBD | YCB-Video | PoseCNN + DeepIM | Mean ADD | 80.6 | #2 |
| 6D Pose Estimation using RGBD | YCB-Video | PoseCNN + DeepIM | Mean ADI | 92.4 | #1 |
