Single-view robot pose and joint angle estimation via render & compare

We introduce RoboPose, a method to estimate the joint angles and the 6D camera-to-robot pose of a known articulated robot from a single RGB image. This is an important problem to grant mobile and itinerant autonomous systems the ability to interact with other robots using only visual information in non-instrumented environments, especially in the context of collaborative robotics. It is also challenging because robots have many degrees of freedom and an infinite space of possible configurations that often result in self-occlusions and depth ambiguities when imaged by a single camera. The contributions of this work are three-fold. First, we introduce a new render & compare approach for estimating the 6D pose and joint angles of an articulated robot that can be trained from synthetic data, generalizes to new unseen robot configurations at test time, and can be applied to a variety of robots. Second, we experimentally demonstrate the importance of the robot parametrization for the iterative pose updates and design a parametrization strategy that is independent of the robot structure. Finally, we show experimental results on existing benchmark datasets for four different robots and demonstrate that our method significantly outperforms the state of the art. Code and pre-trained models are available on the project webpage https://www.di.ens.fr/willow/research/robopose/.
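The core of the approach is an iterative render & compare loop: the current estimate of the camera-to-robot pose and joint angles is rendered, the rendering is compared to the input image by a deep network, and the estimate is refined with the predicted corrections. The sketch below illustrates this generic loop only; the `renderer` and `model` callables, the update rule, and the iteration count are illustrative assumptions and not the paper's exact parametrization.

```python
import torch

@torch.no_grad()
def render_and_compare(image, renderer, model, pose, joint_angles, num_iterations=10):
    """Illustrative render & compare refinement loop (a sketch, not the paper's
    exact implementation). `renderer` produces a synthetic view of the robot from
    the current state; `model` is a trained network predicting state corrections."""
    for _ in range(num_iterations):
        rendered = renderer(pose, joint_angles)            # render the current estimate
        delta_pose, delta_angles = model(image, rendered)  # compare rendering to the observation
        pose = delta_pose @ pose                           # apply rigid 6D pose update (4x4 matrices)
        joint_angles = joint_angles + delta_angles         # apply joint-angle update
    return pose, joint_angles
```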

CVPR 2021

Datasets


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Robot Pose Estimation | DREAM-dataset | RoboPose (unknown-joint) | AUC (avg. on 4 real DREAM datasets) | 73.2 | #5 |
| Robot Pose Estimation | DREAM-dataset | RoboPose (unknown-joint) | mean-ADD (avg. on 4 real DREAM datasets) | 28.2 | #5 |
| Robot Pose Estimation | DREAM-dataset | RoboPose (known-joint) | AUC (avg. on 4 real DREAM datasets) | 80.0 | #3 |
| Robot Pose Estimation | DREAM-dataset | RoboPose (known-joint) | mean-ADD (avg. on 4 real DREAM datasets) | 20.2 | #3 |
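For reference, the two metrics summarize pose accuracy on the DREAM benchmark: ADD is the mean 3D distance between corresponding robot points under the estimated and ground-truth camera-to-robot poses, and AUC is the area under the curve of the fraction of frames whose ADD falls below a distance threshold. The sketch below is a generic illustration assuming NumPy, 4x4 homogeneous pose matrices, and a 10 cm maximum threshold; these details are assumptions, not taken from this page.

```python
import numpy as np

def add_error(T_est, T_gt, points):
    """ADD: mean Euclidean distance between 3D points transformed by the
    estimated and ground-truth camera-to-robot poses (4x4 matrices)."""
    p = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coordinates (N, 4)
    p_est = (T_est @ p.T).T[:, :3]
    p_gt = (T_gt @ p.T).T[:, :3]
    return np.linalg.norm(p_est - p_gt, axis=1).mean()

def auc_of_add(errors, max_threshold=0.10, num_steps=1000):
    """Area under the ADD pass-rate curve for thresholds in [0, max_threshold]
    (10 cm assumed here), normalized to [0, 1]."""
    errors = np.asarray(errors)
    thresholds = np.linspace(0.0, max_threshold, num_steps)
    pass_rates = [(errors <= t).mean() for t in thresholds]
    return np.trapz(pass_rates, thresholds) / max_threshold
```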
