Occlusion-Robust Object Pose Estimation with Holistic Representation

22 Oct 2021 · Bo Chen, Tat-Jun Chin, Marius Klimavicius

Practical object pose estimation demands robustness against occlusion of the target object. State-of-the-art (SOTA) object pose estimators take a two-stage approach: the first stage predicts 2D landmarks with a deep network, and the second stage solves for the 6DOF pose from 2D-3D correspondences. Although widely adopted, such two-stage approaches can suffer from novel occlusions not seen during training and from weak landmark coherence due to disrupted features. To address these issues, we develop a novel occlude-and-blackout batch augmentation technique to learn occlusion-robust deep features, and a multi-precision supervision architecture that encourages holistic pose representation learning for accurate and coherent landmark predictions. We perform careful ablation tests to verify the impact of our innovations and compare our method to SOTA pose estimators. Without the need for any post-processing or refinement, our method exhibits superior performance on the LINEMOD dataset. On the YCB-Video dataset our method outperforms all non-refinement methods in terms of the ADD(-S) metric. We also demonstrate the high data efficiency of our method. Our code is available at http://github.com/BoChenYS/ROPE
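To make the two components concrete, below is a minimal, hypothetical sketch of an occlude-and-blackout style batch augmentation in PyTorch. The function name, the `occluder_frac` parameter, and the use of a single random rectangular occluder are illustrative assumptions, not the paper's exact recipe.

```python
import torch

def occlude_and_blackout(images, boxes, occluder_frac=0.3):
    """Create an occluded, background-blacked-out copy of a training batch.

    images: (B, C, H, W) float tensor.
    boxes:  (B, 4) long tensor of object boxes (x1, y1, x2, y2) in pixels.
    """
    aug = images.clone()
    B, C, H, W = aug.shape
    for i in range(B):
        x1, y1, x2, y2 = boxes[i].tolist()
        # Black out everything outside the object's bounding box so the
        # network cannot lean on background context.
        mask = torch.zeros(1, H, W, dtype=aug.dtype, device=aug.device)
        mask[:, y1:y2, x1:x2] = 1.0
        aug[i] = aug[i] * mask
        # Paste one random rectangular occluder over part of the object.
        ow = max(1, int((x2 - x1) * occluder_frac))
        oh = max(1, int((y2 - y1) * occluder_frac))
        ox = int(torch.randint(x1, max(x1 + 1, x2 - ow), (1,)))
        oy = int(torch.randint(y1, max(y1 + 1, y2 - oh), (1,)))
        aug[i, :, oy:oy + oh, ox:ox + ow] = torch.rand(C, 1, 1, device=aug.device)
    return aug
```

A typical use would be to train the landmark network on both the clean batch and `occlude_and_blackout(imgs, boxes)` so features remain stable under occlusion. For the second stage, the standard way to recover a 6DOF pose from 2D-3D correspondences is PnP with RANSAC; the sketch below uses OpenCV and assumes the predicted landmarks are already matched to their 3D model points (the paper's exact solver choice is not specified here).

```python
import numpy as np
import cv2

def solve_pose(landmarks_2d, points_3d, K):
    """Second stage: 6DOF pose from matched 2D-3D correspondences via PnP."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float64),     # (N, 3) landmark coordinates on the 3D model
        landmarks_2d.astype(np.float64),  # (N, 2) landmarks predicted in the image
        K.astype(np.float64),             # (3, 3) camera intrinsics
        distCoeffs=None)
    R, _ = cv2.Rodrigues(rvec)            # axis-angle -> 3x3 rotation matrix
    return R, tvec
```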

Results

Task                          Dataset            Model  Metric          Value   Global Rank
6D Pose Estimation using RGB  LineMOD            ROPE   Accuracy (ADD)  95.61%  #5
6D Pose Estimation using RGB  LineMOD            ROPE   Mean ADD        95.61   #7
6D Pose Estimation using RGB  Occlusion LineMOD  ROPE   Mean ADD        45.95   #10
6D Pose Estimation using RGB  YCB-Video          ROPE   Mean AUC        79.88   #1
6D Pose Estimation using RGB  YCB-Video          ROPE   Mean ADD        66.59   #2
