THUNDR: Transformer-based 3D HUmaN Reconstruction with Markers

We present THUNDR, a transformer-based deep neural network for reconstructing the 3D pose and shape of people from monocular RGB images. Key to our method is an intermediate 3D marker representation, through which we aim to combine the predictive power of model-free output architectures with the regularizing, anthropometry-preserving properties of a statistical human surface model such as GHUM, a recently introduced, expressive, full-body statistical 3D human model trained end-to-end. Our transformer-based prediction pipeline can focus on image regions relevant to the task, supports self-supervised training regimes, and ensures that solutions are consistent with human anthropometry. We show state-of-the-art results on Human3.6M and 3DPW, for both the fully-supervised and the self-supervised models, on the task of inferring 3D human shape, joint positions, and global translation. Moreover, we observe robust 3D reconstruction performance for difficult human poses collected in the wild.
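
The pipeline the abstract describes (a transformer attending over image regions and regressing an intermediate set of 3D surface markers, to which a body model such as GHUM is subsequently fitted) can be sketched roughly as below. This is a minimal illustrative stand-in, not the authors' released architecture: the backbone feature shape, the marker count, the layer sizes, and the downstream GHUM-fitting step are all assumptions.

```python
# Minimal sketch (PyTorch) of a transformer that regresses 3D markers from
# image-patch features. NOT the authors' architecture; all sizes are assumed.
import torch
import torch.nn as nn

class MarkerTransformer(nn.Module):
    def __init__(self, num_markers=67, num_tokens=196, feat_dim=2048, dim=256):
        # num_markers=67 is an illustrative count, not the paper's exact set.
        super().__init__()
        self.token_proj = nn.Linear(feat_dim, dim)          # project backbone features
        self.pos_embed = nn.Parameter(torch.zeros(1, num_tokens, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.marker_head = nn.Linear(dim, num_markers * 3)  # regress (x, y, z) per marker
        self.num_markers = num_markers

    def forward(self, patch_features):
        # patch_features: (B, num_tokens, feat_dim) from a CNN/ViT backbone.
        x = self.token_proj(patch_features) + self.pos_embed
        x = self.encoder(x)            # self-attention over image regions
        x = x.mean(dim=1)              # pool tokens
        markers = self.marker_head(x)  # (B, num_markers * 3)
        return markers.view(-1, self.num_markers, 3)

model = MarkerTransformer()
feats = torch.randn(2, 196, 2048)
print(model(feats).shape)  # torch.Size([2, 67, 3])
# A GHUM-fitting stage (not shown) would recover pose/shape parameters
# consistent with these predicted markers.
```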


Datasets

Human3.6M, 3DPW

Results from the Paper


| Task | Dataset | Model | Metric | Value (mm) | Global Rank |
|------|---------|-------|--------|------------|-------------|
| 3D Human Pose Estimation | 3DPW | THUNDR | PA-MPJPE | 51.5 | #61 |
| 3D Human Pose Estimation | 3DPW | THUNDR | MPJPE | 74.8 | #40 |
| 3D Human Pose Estimation | 3DPW | THUNDR (WS) | PA-MPJPE | 59.9 | #95 |
| 3D Human Pose Estimation | 3DPW | THUNDR (WS) | MPJPE | 86.8 | #81 |
| 3D Human Pose Estimation | Human3.6M | THUNDR | Average MPJPE | 55 | #219 |
| 3D Human Pose Estimation | Human3.6M | THUNDR | PA-MPJPE | 39.8 | #52 |
| 3D Human Pose Estimation | Human3.6M | THUNDR (WS) | Average MPJPE | 87 | #298 |
| 3D Human Pose Estimation | Human3.6M | THUNDR (WS) | PA-MPJPE | 62.2 | #106 |

(WS) denotes the weakly-supervised (self-supervised) variant; all metric values are in mm.
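
The metrics above are standard: MPJPE is the mean per-joint position error (mean Euclidean distance between predicted and ground-truth joints, in mm), and PA-MPJPE computes the same error after rigidly aligning the prediction to the ground truth with a Procrustes (similarity) transform. A minimal numpy sketch of both, assuming (J, 3) joint arrays in mm:

```python
# Standard MPJPE / PA-MPJPE definitions; not code from the paper.
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error: mean Euclidean distance, in mm."""
    return np.mean(np.linalg.norm(pred - gt, axis=-1))

def pa_mpjpe(pred, gt):
    """MPJPE after Procrustes (similarity) alignment of pred to gt."""
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    p, g = pred - mu_p, gt - mu_g
    # Optimal rotation via SVD of the cross-covariance matrix (Kabsch).
    U, S, Vt = np.linalg.svd(p.T @ g)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:  # avoid reflections
        Vt[-1] *= -1
        S[-1] *= -1
        R = (U @ Vt).T
    scale = S.sum() / (p ** 2).sum()
    aligned = scale * p @ R.T + mu_g
    return mpjpe(aligned, gt)

# Toy usage: a pure translation is removed by the alignment, so both errors ~0.
joints = np.random.randn(17, 3) * 100.0
print(mpjpe(joints, joints), pa_mpjpe(joints + 10.0, joints))
```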

Methods


Transformer, GHUM (statistical 3D human body model)