MaskedFusion: Mask-based 6D Object Pose Estimation

18 Nov 2019 · Nuno Pereira, Luís A. Alexandre

MaskedFusion is a framework for estimating the 6D pose of objects from RGB-D data, built as a pipeline of sub-tasks that together yield accurate 6D poses. 6D pose estimation remains an open challenge because of the complexity of real-world objects and the many problems that arise when capturing real-world data, e.g., occlusions, truncations, and sensor noise. Accurate 6D poses in turn improve results on other open problems such as robot grasping or positioning objects in augmented reality. MaskedFusion improves on the state of the art by using object masks to eliminate non-relevant data. Feeding the masks into the neural network that estimates the 6D pose also provides the network with features that represent the object's shape. MaskedFusion is a modular pipeline in which each sub-task can be solved by different methods. MaskedFusion achieved 97.3% on average with the ADD metric on the LineMOD dataset and 93.3% with the ADD-S AUC metric on the YCB-Video dataset, an improvement over state-of-the-art methods. The code is available on GitHub (https://github.com/kroglice/MaskedFusion).
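To make the mask-based idea concrete, below is a minimal PyTorch-style sketch of how a predicted object mask can be used both to discard non-relevant pixels and to supply the object's silhouette to the pose network. All names (`MaskedPoseNet`, `rot_head`, `trans_head`, layer sizes) are hypothetical illustrations, not the actual MaskedFusion architecture; see the GitHub repository for the real implementation.

```python
import torch
import torch.nn as nn

class MaskedPoseNet(nn.Module):
    """Hypothetical sketch: fuse RGB, depth, and a binary object mask
    before regressing a 6D pose (rotation quaternion + translation)."""

    def __init__(self):
        super().__init__()
        # Shared encoder over the masked RGB-D + mask stack (3 + 1 + 1 channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(5, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Separate heads for rotation (unit quaternion) and translation.
        self.rot_head = nn.Linear(128, 4)
        self.trans_head = nn.Linear(128, 3)

    def forward(self, rgb, depth, mask):
        # Zero out pixels outside the object mask, then keep the mask itself
        # as an extra channel so the network also sees the object's shape.
        masked_rgb = rgb * mask
        masked_depth = depth * mask
        x = torch.cat([masked_rgb, masked_depth, mask], dim=1)
        feat = self.encoder(x).flatten(1)
        quat = nn.functional.normalize(self.rot_head(feat), dim=1)
        trans = self.trans_head(feat)
        return quat, trans

# Example usage with random tensors (batch of 2 RGB-D frames).
rgb = torch.rand(2, 3, 240, 320)
depth = torch.rand(2, 1, 240, 320)
mask = (torch.rand(2, 1, 240, 320) > 0.5).float()
quat, trans = MaskedPoseNet()(rgb, depth, mask)
print(quat.shape, trans.shape)  # torch.Size([2, 4]) torch.Size([2, 3])
```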


Datasets

LineMOD, YCB-Video

Results
Task                           Dataset    Model         Metric Name     Metric Value  Global Rank
6D Pose Estimation using RGBD  LineMOD    MaskedFusion  Mean ADD        97.8          #4
6D Pose Estimation             LineMOD    MaskedFusion  Accuracy (ADD)  97.8          #3
6D Pose Estimation using RGBD  YCB-Video  MaskedFusion  Mean ADD        93.3          #2
6D Pose Estimation using RGBD  YCB-Video  MaskedFusion  Mean ADD-S      93.3          #3
6D Pose Estimation             YCB-Video  MaskedFusion  ADD-S AUC       93.3          #7
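The metrics in the table follow the standard LineMOD and YCB-Video definitions: ADD averages the distance between corresponding model points transformed by the estimated and ground-truth poses, while ADD-S uses the closest-point distance so that symmetric objects are scored fairly. The following NumPy sketch illustrates both; variable and function names are chosen here for illustration and are not taken from the MaskedFusion code.

```python
import numpy as np
from scipy.spatial import cKDTree

def add_metric(model_pts, R_est, t_est, R_gt, t_gt):
    """ADD: mean distance between corresponding model points under the
    estimated and ground-truth rigid transforms."""
    est = model_pts @ R_est.T + t_est
    gt = model_pts @ R_gt.T + t_gt
    return np.linalg.norm(est - gt, axis=1).mean()

def add_s_metric(model_pts, R_est, t_est, R_gt, t_gt):
    """ADD-S: mean distance from each estimated point to its *nearest*
    ground-truth point, making the score invariant to object symmetry."""
    est = model_pts @ R_est.T + t_est
    gt = model_pts @ R_gt.T + t_gt
    nn_dist, _ = cKDTree(gt).query(est, k=1)
    return nn_dist.mean()

# On LineMOD a pose is typically counted as correct when ADD (or ADD-S for
# symmetric objects) falls below 10% of the object's diameter; on YCB-Video
# the reported number is the area under the ADD-S accuracy-threshold curve.
```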

Methods


No methods listed for this paper.