3D Human Pose and Shape Estimation via HybrIK-Transformer

9 Feb 2023 · Boris N. Oreshkin

HybrIK relies on a combination of analytical inverse kinematics and deep learning to produce more accurate 3D pose estimates from monocular 2D images. HybrIK has three major components: (1) a pretrained convolutional backbone, (2) a deconvolution module that lifts 3D pose from 2D convolutional features, and (3) an analytical inverse-kinematics pass that corrects the deep-learning prediction using a learned distribution of plausible twist and swing angles. In this paper we propose an enhancement of the 2D-to-3D lifting module, replacing the deconvolution with a Transformer, which improves both accuracy and computational efficiency relative to the original HybrIK method. We demonstrate our results on the commonly used H36M, PW3D, COCO and HP3D datasets. Our code is publicly available at https://github.com/boreshkinai/hybrik-transformer.
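The analytical inverse-kinematics pass in component (3) hinges on the twist-and-swing decomposition: the swing rotation is recovered in closed form from the predicted 3D bone direction, while the twist angle about the bone axis is left to the network. A minimal, dependency-free sketch of that decomposition follows; function names are illustrative, not taken from the HybrIK codebase.

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def _normalize(v):
    n = math.sqrt(_dot(v, v))
    return [x / n for x in v]

def _matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(r, v):
    return [_dot(row, v) for row in r]

def rodrigues(axis, angle):
    """Rotation matrix for a rotation of `angle` radians about unit `axis`."""
    kx, ky, kz = axis
    c, s = math.cos(angle), math.sin(angle)
    k = [[0.0, -kz, ky], [kz, 0.0, -kx], [-ky, kx, 0.0]]
    k2 = _matmul(k, k)
    return [[(1.0 if i == j else 0.0) + s * k[i][j] + (1.0 - c) * k2[i][j]
             for j in range(3)] for i in range(3)]

def swing_twist_rotation(template_dir, predicted_dir, twist_angle):
    """Full bone rotation = twist (about the bone) composed with swing.

    The swing is the minimal rotation taking the rest-pose bone direction
    to the direction implied by the predicted 3D joints (closed form);
    the twist angle about the bone axis comes from the network.
    """
    t = _normalize(template_dir)
    p = _normalize(predicted_dir)
    axis = _cross(t, p)
    s = math.sqrt(_dot(axis, axis))
    c = _dot(t, p)
    if s < 1e-8:
        # Parallel directions: swing is the identity. The antiparallel
        # case would need a 180-degree flip, omitted in this sketch.
        swing = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    else:
        swing = rodrigues(_normalize(axis), math.atan2(s, c))
    twist = rodrigues(p, twist_angle)  # leaves the bone direction fixed
    return _matmul(twist, swing)
```

Because the twist rotates about the bone axis itself, `swing_twist_rotation([0, 0, 1], [1, 0, 0], theta)` maps the rest-pose direction `[0, 0, 1]` onto the predicted direction `[1, 0, 0]` for any twist angle `theta`.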


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| 3D Human Pose Estimation | 3DPW | HybrIK-Transformer (HRNet-48) | PA-MPJPE | 42.3 | #21 |
| 3D Human Pose Estimation | 3DPW | HybrIK-Transformer (HRNet-48) | MPJPE | 71.6 | #25 |
| 3D Human Pose Estimation | 3DPW | HybrIK-Transformer (HRNet-48) | MPVPE | 83.6 | #22 |
| 3D Human Pose Estimation | Human3.6M | HybrIK-Transformer (HRNet-48) | Average MPJPE (mm) | 47.5 | #135 |
| 3D Human Pose Estimation | Human3.6M | HybrIK-Transformer (HRNet-48) | PA-MPJPE | 29.5 | #5 |
| 3D Human Pose Estimation | MPI-INF-3DHP | HybrIK-Transformer (HRNet-48) | AUC | 48.9 | #45 |
| 3D Human Pose Estimation | MPI-INF-3DHP | HybrIK-Transformer (HRNet-48) | MPJPE | 86.2 | #41 |
| 3D Human Pose Estimation | MPI-INF-3DHP | HybrIK-Transformer (HRNet-48) | PCK | 88.6 | #31 |
