WHAM: Reconstructing World-grounded Humans with Accurate 3D Motion

12 Dec 2023 · Soyong Shin, Juyong Kim, Eni Halilaj, Michael J. Black

The estimation of 3D human motion from video has progressed rapidly but current methods still have several key limitations. First, most methods estimate the human in camera coordinates. Second, prior work on estimating humans in global coordinates often assumes a flat ground plane and produces foot sliding. Third, the most accurate methods rely on computationally expensive optimization pipelines, limiting their use to offline applications. Finally, existing video-based methods are surprisingly less accurate than single-frame methods. We address these limitations with WHAM (World-grounded Humans with Accurate Motion), which accurately and efficiently reconstructs 3D human motion in a global coordinate system from video. WHAM learns to lift 2D keypoint sequences to 3D using motion capture data and fuses this with video features, integrating motion context and visual information. WHAM exploits camera angular velocity estimated from a SLAM method together with human motion to estimate the body's global trajectory. We combine this with a contact-aware trajectory refinement method that lets WHAM capture human motion in diverse conditions, such as climbing stairs. WHAM outperforms all existing 3D human motion recovery methods across multiple in-the-wild benchmarks. Code will be available for research purposes at http://wham.is.tue.mpg.de/
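The contact-aware trajectory refinement described above can be sketched minimally: when a foot is predicted to be in ground contact, its world velocity should be near zero, so any residual foot velocity is treated as sliding and subtracted from the root velocity before integrating to a global trajectory. This is an illustrative toy sketch, not the paper's implementation; the function name `refine_trajectory`, the 0.5 contact threshold, and the simple per-frame subtraction are all assumptions.

```python
# Hypothetical sketch of a contact-aware trajectory refinement step,
# loosely inspired by WHAM's idea of using foot-contact predictions to
# suppress foot sliding. Names and thresholds are illustrative only.
import numpy as np

def refine_trajectory(root_vel, foot_vel, contact_prob, thresh=0.5):
    """Adjust per-frame root velocities using foot-contact predictions.

    root_vel:     (T, 3) per-frame root velocity in world coordinates
    foot_vel:     (T, 3) per-frame velocity of the contacting foot
    contact_prob: (T,)   predicted probability the foot is on the ground

    Frames where contact is likely subtract the residual foot velocity
    (assumed to be sliding error); the corrected velocities are then
    integrated by cumulative sum into a global root trajectory.
    """
    root_vel = np.asarray(root_vel, dtype=float)
    foot_vel = np.asarray(foot_vel, dtype=float)
    in_contact = np.asarray(contact_prob, dtype=float) > thresh

    adjusted = root_vel.copy()
    adjusted[in_contact] -= foot_vel[in_contact]  # cancel sliding drift
    return np.cumsum(adjusted, axis=0)            # velocity -> positions
```

For example, a constant root velocity of 1 m/frame with a 0.2 m/frame sliding foot during two contact frames yields a shorter, drift-corrected trajectory than naive integration would.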


Results from the Paper


| Task | Dataset | Model | Metric | Value | Rank |
|---|---|---|---|---|---|
| 3D Human Pose Estimation | 3DPW | WHAM (ViT) | PA-MPJPE | 35.9 | #1 |
| 3D Human Pose Estimation | 3DPW | WHAM (ViT) | MPJPE | 57.8 | #1 |
| 3D Human Pose Estimation | 3DPW | WHAM (ViT) | MPVPE | 68.7 | #1 |
| 3D Human Pose Estimation | EMDB | WHAM (ViT) | Average MPJPE (mm) | 79.7 | #1 |
| 3D Human Pose Estimation | EMDB | WHAM (ViT) | Average MPJPE-PA (mm) | 50.4 | #1 |
| 3D Human Pose Estimation | EMDB | WHAM (ViT) | Average MVE (mm) | 94.4 | #1 |
| 3D Human Pose Estimation | RICH | WHAM (ViT) | MPJPE | 80 | #2 |
| 3D Human Pose Estimation | RICH | WHAM (ViT) | PA-MPJPE | 44.3 | #1 |
| 3D Human Pose Estimation | RICH | WHAM (ViT) | MPVPE | 91.2 | #2 |
