Two-hand Global 3D Pose Estimation Using Monocular RGB

1 Jun 2020  ·  Fanqing Lin, Connor Wilhelm, Tony Martinez

We tackle the challenging task of estimating global 3D joint locations for both hands using only monocular RGB input images. We propose a novel multi-stage convolutional neural network (CNN) pipeline that accurately segments and locates the hands despite inter-hand occlusion and complex background noise, and estimates the 2D and 3D canonical joint locations without any depth information. Global joint locations with respect to the camera origin are then computed from the estimated hand poses and the actual length of a key bone using a novel projection algorithm. To train the CNNs for this new task, we introduce a large-scale synthetic 3D hand pose dataset. We demonstrate that our system outperforms previous work on 3D canonical hand pose estimation benchmark datasets with RGB-only information. Additionally, we present the first work to achieve accurate global 3D hand tracking for both hands using RGB-only inputs, and we provide extensive quantitative and qualitative evaluation.
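The paper's exact projection algorithm is not reproduced here, so the following is a minimal sketch of one standard way to realize the step the abstract describes: rescale the canonical (root-relative, scale-normalized) pose to metric units using the known key-bone length, then solve linear least-squares pinhole-projection constraints for the global root translation. The function name, signature, camera model, and all variable names are illustrative assumptions, not the authors' implementation.

```python
# Sketch (not the authors' algorithm): recover a global hand translation from
# a scale-normalized canonical 3D pose, the metric length of a known "key
# bone", and predicted 2D joint locations, assuming an ideal pinhole camera.
import numpy as np

def recover_global_translation(canonical_3d, joints_2d, key_bone, bone_len_m, f, cx, cy):
    """Solve for the root translation t = (tx, ty, tz) in camera space.

    canonical_3d : (J, 3) root-relative joints, normalized so the key bone
                   has unit length.
    joints_2d    : (J, 2) predicted pixel coordinates (u, v).
    key_bone     : (parent_idx, child_idx) of the bone with known length.
    bone_len_m   : real-world key-bone length in meters.
    f, cx, cy    : focal length and principal point of the pinhole camera.
    """
    # Rescale the canonical pose to metric units using the key-bone length.
    p, c = key_bone
    unit_len = np.linalg.norm(canonical_3d[c] - canonical_3d[p])
    rel = canonical_3d * (bone_len_m / unit_len)   # (J, 3), meters

    # Pinhole projection u = f*X/Z + cx rearranges into constraints that are
    # linear in t:  f*(x_i + tx) + (cx - u_i)*(z_i + tz) = 0  (same for v).
    J = rel.shape[0]
    A = np.zeros((2 * J, 3))
    b = np.zeros(2 * J)
    for i in range(J):
        x, y, z = rel[i]
        u, v = joints_2d[i]
        A[2 * i]     = [f, 0.0, cx - u]
        b[2 * i]     = -f * x - (cx - u) * z
        A[2 * i + 1] = [0.0, f, cy - v]
        b[2 * i + 1] = -f * y - (cy - v) * z

    # Least-squares translation; adding it to the metric relative pose gives
    # global 3D joints in camera coordinates.
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return rel + t
```

Because the constraints are linear in the translation, this solves in closed form without iterative optimization; it assumes the hand is in front of the camera and that the 2D predictions correspond joint-for-joint to the canonical pose.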


Datasets

Ego3DHands, RHP, STB

Results

Task                               Dataset     Metric  Value   Global Rank
3D Canonical Hand Pose Estimation  Ego3DHands  AUC     0.681   #1
3D Canonical Hand Pose Estimation  RHP         AUC     0.942   #1
3D Canonical Hand Pose Estimation  STB         AUC     0.995   #1
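For reference, the AUC reported above is conventionally the normalized area under the 3D PCK curve (the fraction of predicted joints within a distance threshold of the ground truth, swept over a range of thresholds). A minimal sketch of that metric, assuming millimeter units and the 20–50 mm threshold range common in this literature (not values taken from this paper):

```python
# Sketch of AUC over the 3D PCK curve, a standard 3D hand pose metric.
import numpy as np

def pck_auc(pred, gt, thresholds_mm=np.linspace(20, 50, 100)):
    """pred, gt: (N, J, 3) joint positions in millimeters."""
    errors = np.linalg.norm(pred - gt, axis=-1)            # (N, J) per-joint error
    pck = [(errors <= t).mean() for t in thresholds_mm]    # hit rate per threshold
    # Trapezoidal area, normalized by the threshold span so AUC lies in [0, 1].
    return np.trapz(pck, thresholds_mm) / (thresholds_mm[-1] - thresholds_mm[0])
```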

Methods

No methods listed for this paper.