A Lightweight Graph Transformer Network for Human Mesh Reconstruction from 2D Human Pose

24 Nov 2021 · Ce Zheng, Matias Mendieta, Pu Wang, Aidong Lu, Chen Chen

Existing deep learning-based approaches to human mesh reconstruction tend to build ever-larger networks to achieve higher accuracy. Computational complexity and model size are often neglected, despite being key characteristics for the practical use of human mesh reconstruction models (e.g., in virtual try-on systems). In this paper, we present GTRS, a lightweight pose-based method that reconstructs a human mesh from a 2D human pose. We propose a pose analysis module that uses graph transformers to exploit both structured and implicit joint correlations, and a mesh regression module that combines the extracted pose features with a mesh template to reconstruct the final human mesh. We demonstrate the efficiency and generalization ability of GTRS through extensive evaluations on the Human3.6M and 3DPW datasets. In particular, GTRS achieves better accuracy than the state-of-the-art (SOTA) pose-based method Pose2Mesh while using only 10.2% of its parameters and 2.5% of its FLOPs on the challenging in-the-wild 3DPW dataset. Code will be made publicly available.
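To make the two-module design concrete, below is a minimal PyTorch sketch of the idea described in the abstract: a pose analysis stage built from graph transformer blocks (graph convolution over the skeleton adjacency for structured joint correlations, plus self-attention for implicit ones), followed by a mesh regression stage that adds predicted offsets to a fixed template mesh. All names, widths, the joint count (17), the vertex count (SMPL's 6890), and the adjacency handling are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class GraphTransformerBlock(nn.Module):
    """Graph convolution over the skeleton adjacency (structured joint
    correlations) followed by self-attention (implicit, non-local
    correlations). A sketch of the idea, not the paper's exact layer."""
    def __init__(self, dim, adj, heads=4):
        super().__init__()
        self.register_buffer("adj", adj)          # (J, J) normalized skeleton adjacency
        self.gcn = nn.Linear(dim, dim)            # shared per-joint weights for graph conv
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 2 * dim), nn.GELU(),
                                 nn.Linear(2 * dim, dim))

    def forward(self, x):                         # x: (B, J, dim)
        x = x + self.adj @ self.gcn(x)            # aggregate features along skeleton bones
        x = self.norm1(x)
        x = x + self.attn(x, x, x)[0]             # attend across all joint pairs
        return x + self.mlp(self.norm2(x))

class GTRSSketch(nn.Module):
    """Two-module pipeline: pose analysis -> mesh regression. Joint count,
    vertex count, and feature widths are illustrative assumptions."""
    def __init__(self, adj, num_joints=17, dim=64, num_vertices=6890, depth=4):
        super().__init__()
        self.embed = nn.Linear(2, dim)            # lift 2D joint coordinates to features
        self.pose_analysis = nn.Sequential(
            *[GraphTransformerBlock(dim, adj) for _ in range(depth)])
        # Fixed template mesh (e.g. the SMPL mean shape) used as the starting point.
        self.register_buffer("template", torch.zeros(num_vertices, 3))
        self.regress = nn.Linear(num_joints * dim, num_vertices * 3)

    def forward(self, pose2d):                    # pose2d: (B, J, 2)
        feat = self.pose_analysis(self.embed(pose2d))
        offsets = self.regress(feat.flatten(1)).view(-1, *self.template.shape)
        return self.template + offsets            # mesh vertices, (B, V, 3)

# Usage: adj encodes bone connectivity; identity here as a placeholder.
model = GTRSSketch(adj=torch.eye(17))
vertices = model(torch.randn(2, 17, 2))           # -> torch.Size([2, 6890, 3])
```

Regressing vertex offsets from a flattened pose feature is one simple design choice; graph upsampling or coarse-to-fine mesh decoding would be equally plausible ways to realize the mesh regression module.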


Results from the Paper


Task                      Dataset    Model  Metric              Value  Global Rank
3D Human Pose Estimation  3DPW       GTRS   PA-MPJPE (mm)       58.9   #91
3D Human Pose Estimation  3DPW       GTRS   MPJPE (mm)          88.5   #86
3D Human Pose Estimation  3DPW       GTRS   MPVPE (mm)          106.2  #61
3D Human Pose Estimation  Human3.6M  GTRS   Average MPJPE (mm)  64.3   #265
3D Human Pose Estimation  Human3.6M  GTRS   PA-MPJPE (mm)       45.4   #80
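For reference, the table's metrics are standard and easy to compute; a minimal NumPy sketch follows (function names are ours). MPJPE is the mean per-joint Euclidean distance in mm; PA-MPJPE measures the same error after a rigid Procrustes alignment (scale, rotation, translation) of the prediction to the ground truth; MPVPE is the identical computation applied to mesh vertices instead of joints.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: average Euclidean distance (mm)
    between predicted and ground-truth 3D joints, each of shape (J, 3)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """Procrustes-Aligned MPJPE: find the similarity transform that best
    aligns pred to gt, then measure MPJPE. MPVPE is the same computation
    over mesh vertices of shape (V, 3) instead of joints."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g                 # center both point sets
    U, S, Vt = np.linalg.svd(p.T @ g)             # SVD of the cross-covariance
    R = Vt.T @ U.T                                # optimal rotation (Kabsch)
    if np.linalg.det(R) < 0:                      # correct an improper reflection
        Vt[-1] *= -1
        S[-1] *= -1
        R = Vt.T @ U.T
    scale = S.sum() / (p ** 2).sum()              # optimal isotropic scale
    aligned = scale * p @ R.T + mu_g              # apply the similarity transform
    return mpjpe(aligned, gt)

# Usage: pa_mpjpe(pred_joints, gt_joints) with two (17, 3) arrays in mm.
```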
