3D Human Mesh Estimation from Virtual Markers

CVPR 2023 · Xiaoxuan Ma, Jiajun Su, Chunyu Wang, Wentao Zhu, Yizhou Wang

Inspired by the success of volumetric 3D pose estimation, some recent human mesh estimators estimate 3D skeletons as intermediate representations, from which dense 3D meshes are regressed by exploiting the mesh topology. However, body shape information is lost when extracting skeletons, leading to mediocre performance. Advanced motion capture systems solve this problem by placing dense physical markers on the body surface, which allows realistic meshes to be extracted from their non-rigid motions. However, they cannot be applied to in-the-wild images without markers. In this work, we present an intermediate representation, named virtual markers, which learns 64 landmark keypoints on the body surface from large-scale mocap data in a generative style, mimicking the effects of physical markers. The virtual markers can be accurately detected from wild images and can reconstruct intact meshes with realistic shapes by simple interpolation. Our approach outperforms state-of-the-art methods on three datasets. In particular, it surpasses existing methods by a notable margin on the SURREAL dataset, which has diverse body shapes. Code is available at https://github.com/ShirleyMaxx/VirtualMarker.
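The abstract's "simple interpolation" step means each of the dense mesh vertices is expressed as a weighted combination of the sparse virtual markers. Below is a minimal NumPy sketch of that reconstruction step, not the authors' implementation (see their GitHub release for that); the `weights` matrix here is a random, hypothetical stand-in for the interpolation coefficients that the paper learns from mocap data, and the vertex count 6890 assumes a SMPL-style mesh.

```python
import numpy as np

def reconstruct_mesh(markers, weights):
    """Interpolate dense mesh vertices from sparse 3D markers.

    markers: (K, 3) array of estimated 3D virtual-marker positions.
    weights: (V, K) interpolation matrix, one row of mixing
             coefficients per mesh vertex (learned offline in the
             paper's setup; random here for illustration).
    Returns: (V, 3) array of reconstructed mesh vertex positions.
    """
    return weights @ markers

# Toy example: 64 virtual markers, 6890 vertices (SMPL-style mesh).
rng = np.random.default_rng(0)
markers = rng.standard_normal((64, 3))
weights = rng.random((6890, 64))
weights /= weights.sum(axis=1, keepdims=True)  # convex combination per vertex
mesh = reconstruct_mesh(markers, weights)
print(mesh.shape)  # (6890, 3)
```

Because reconstruction is a single matrix product, errors in detected marker positions propagate linearly to the mesh, which is why the markers must be accurately detectable from images.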


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| 3D Human Pose Estimation | 3DPW | VirtualMarker | PA-MPJPE | 41.3 | #16 |
| 3D Human Pose Estimation | 3DPW | VirtualMarker | MPJPE | 67.5 | #15 |
| 3D Human Pose Estimation | 3DPW | VirtualMarker | MPVPE | 77.9 | #10 |
| 3D Human Pose Estimation | Human3.6M | VirtualMarker | Average MPJPE (mm) | 47.3 | #132 |
| 3D Human Pose Estimation | Human3.6M | VirtualMarker | PA-MPJPE | 32 | #13 |
| 3D Human Pose Estimation | SURREAL | VirtualMarker | MPJPE | 36.9 | #1 |
| 3D Human Pose Estimation | SURREAL | VirtualMarker | PA-MPJPE | 28.9 | #1 |
| 3D Human Pose Estimation | SURREAL | VirtualMarker | PVE | 44.7 | #2 |
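The table's two joint-error metrics differ only in alignment: MPJPE is the mean Euclidean distance between predicted and ground-truth 3D joints, while PA-MPJPE first removes global rotation, translation, and scale via a Procrustes fit. A minimal NumPy sketch of how these metrics are conventionally computed (not code from the paper's release) follows; joint counts and units are whatever the benchmark uses, typically millimeters.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error: average Euclidean distance
    between predicted and ground-truth joints, both (J, 3)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """Procrustes-aligned MPJPE: find the similarity transform
    (scale, rotation, translation) best mapping pred onto gt,
    then measure MPJPE of the aligned prediction."""
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    p, g = pred - mu_p, gt - mu_g
    # Orthogonal Procrustes: SVD of the 3x3 cross-covariance.
    U, S, Vt = np.linalg.svd(p.T @ g)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:  # avoid reflections
        Vt[-1] *= -1
        S[-1] *= -1
        R = (U @ Vt).T
    scale = S.sum() / (p ** 2).sum()
    aligned = scale * p @ R.T + mu_g
    return mpjpe(aligned, gt)
```

Because PA-MPJPE discounts global pose and scale errors, it is always at most the MPJPE for the same prediction, consistent with every row of the table above.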
