Semi-Supervised 3D Hand-Object Poses Estimation with Interactions in Time

Estimating 3D hand and object poses from a single image is an extremely challenging problem: hands and objects are often self-occluded during interactions, and 3D annotations are scarce, since even humans cannot perfectly label the ground truth from a single image. To tackle these challenges, we propose a unified framework for estimating 3D hand and object poses with semi-supervised learning. We build a joint learning framework that performs explicit contextual reasoning between hand and object representations via a Transformer. Going beyond the limited 3D annotations available for single images, we leverage spatial-temporal consistency in large-scale hand-object videos as a constraint for generating pseudo labels in semi-supervised learning. Our method not only improves hand pose estimation on a challenging real-world dataset, but also substantially improves object pose estimation, which has fewer ground truths per instance. By training on large-scale, diverse videos, our model also generalizes better across multiple out-of-domain datasets. Project page and code: https://stevenlsw.github.io/Semi-Hand-Object
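
The abstract names two mechanisms: Transformer-based contextual reasoning between hand and object representations, and a spatial-temporal consistency constraint for generating pseudo labels. The sketch below is a minimal PyTorch illustration of both ideas, not the authors' released implementation; the class and function names (`HandObjectReasoning`, `temporally_consistent`), the tensor shapes, and the motion threshold are assumptions for the sake of the example.

```python
# Minimal sketch of the two ideas in the abstract (assumed PyTorch API);
# names, shapes, and thresholds below are illustrative, not from the paper.
import torch
import torch.nn as nn


class HandObjectReasoning(nn.Module):
    """Contextual reasoning between hand and object tokens via a Transformer.

    Hand and object features are concatenated into one token sequence so that
    self-attention can exchange information between the two representations.
    """

    def __init__(self, dim: int = 256, heads: int = 8, layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, hand_feats: torch.Tensor, obj_feats: torch.Tensor):
        # hand_feats: (B, N_h, dim), obj_feats: (B, N_o, dim)
        tokens = torch.cat([hand_feats, obj_feats], dim=1)
        tokens = self.encoder(tokens)
        n_h = hand_feats.shape[1]
        # Split the jointly-attended sequence back into hand/object tokens.
        return tokens[:, :n_h], tokens[:, n_h:]


def temporally_consistent(pred_t: torch.Tensor,
                          pred_t1: torch.Tensor,
                          thresh_mm: float = 20.0) -> bool:
    """Accept a pseudo label only if per-joint motion between neighboring
    frames stays below a threshold (a stand-in for the paper's
    spatial-temporal consistency constraint; 20 mm is an assumed value)."""
    motion = (pred_t - pred_t1).norm(dim=-1)  # (N_joints,) distances in mm
    return bool((motion < thresh_mm).all())
```

In this reading, unlabeled video frames whose predictions pass the consistency check would be kept as pseudo labels for the semi-supervised stage, while inconsistent ones are discarded.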


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| 3D Hand Pose Estimation | DexYCB | SHO | Average MPJPE (mm) | 15.2 | #8 |
| 3D Hand Pose Estimation | DexYCB | SHO | Procrustes-Aligned MPJPE (mm) | 6.58 | #8 |
| 3D Hand Pose Estimation | DexYCB | SHO | MPVPE | - | #8 |
| 3D Hand Pose Estimation | DexYCB | SHO | VAUC | - | #6 |
| 3D Hand Pose Estimation | DexYCB | SHO | PA-MPVPE | - | #8 |
| 3D Hand Pose Estimation | DexYCB | SHO | PA-VAUC | - | #6 |
| Hand-Object Pose | HO-3D | SHO | Average MPJPE (mm) | - | #6 |
| Hand-Object Pose | HO-3D | SHO | ST-MPJPE (mm) | 31.7 | #7 |
| Hand-Object Pose | HO-3D | SHO | PA-MPJPE (mm) | 10.1 | #3 |
| Hand-Object Pose | HO-3D | SHO | OME | - | #7 |
| Hand-Object Pose | HO-3D | SHO | ADD-S | - | #7 |
| 3D Hand Pose Estimation | HO-3D | SHO | Average MPJPE (mm) | - | #10 |
| 3D Hand Pose Estimation | HO-3D | SHO | ST-MPJPE (mm) | 31.7 | #14 |
| 3D Hand Pose Estimation | HO-3D | SHO | PA-MPJPE (mm) | 10.1 | #7 |
