Total capture: 3D human pose estimation fusing video and inertial sensors

We present an algorithm for fusing multi-viewpoint video (MVV) with inertial measurement unit (IMU) sensor data to accurately estimate 3D human pose. A 3D convolutional neural network is used to learn a pose embedding from volumetric probabilistic visual hull (PVH) data derived from the MVV frames. We incorporate this model within a dual-stream network that integrates the pose embeddings derived from MVV with a forward kinematic solve of the IMU data. A temporal model (LSTM) is incorporated within both streams prior to their fusion. Hybrid pose inference using these two complementary data sources is shown to resolve ambiguities within each sensor modality, yielding improved accuracy over prior methods. A further contribution of this work is a new hybrid MVV dataset (TotalCapture) comprising video, IMU and skeletal joint ground truth derived from a commercial motion capture system. The dataset is available online at http://cvssp.org/data/totalcapture/
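The dual-stream architecture described above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: all dimensions, weight initializations, and the concatenation-based fusion head are assumptions for the sake of a runnable example, and the untrained LSTM weights stand in for the learned video (PVH embedding) and IMU (forward-kinematic) streams.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    # One LSTM cell step; gates (input, forget, output, candidate)
    # are stacked row-wise in W, U and b.
    z = W @ x + U @ h + b
    H = h.shape[0]
    i = sigmoid(z[:H])
    f = sigmoid(z[H:2 * H])
    o = sigmoid(z[2 * H:3 * H])
    g = np.tanh(z[3 * H:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def run_stream(seq, W, U, b, H):
    # Roll an LSTM over one modality's frame sequence, return final state.
    h, c = np.zeros(H), np.zeros(H)
    for x in seq:
        h, c = lstm_step(x, h, c, W, U, b)
    return h

# Hypothetical sizes: per-frame feature dims, hidden size, joint count.
D_vid, D_imu, H, J, T = 16, 12, 8, 21, 5

vid_seq = rng.normal(size=(T, D_vid))  # stand-in for PVH pose embeddings
imu_seq = rng.normal(size=(T, D_imu))  # stand-in for IMU kinematic solve

W_v = rng.normal(scale=0.1, size=(4 * H, D_vid))
U_v = rng.normal(scale=0.1, size=(4 * H, H))
b_v = np.zeros(4 * H)
W_i = rng.normal(scale=0.1, size=(4 * H, D_imu))
U_i = rng.normal(scale=0.1, size=(4 * H, H))
b_i = np.zeros(4 * H)

# Temporal model within each stream, then late fusion by concatenation.
h_v = run_stream(vid_seq, W_v, U_v, b_v, H)
h_i = run_stream(imu_seq, W_i, U_i, b_i, H)
fused = np.concatenate([h_v, h_i])

# Linear head regressing 3D positions for J joints.
W_out = rng.normal(scale=0.1, size=(3 * J, 2 * H))
pose = (W_out @ fused).reshape(J, 3)
print(pose.shape)  # (21, 3)
```

The key design point the paper argues for is that the two streams fail differently (visual hulls suffer from occlusion and silhouette ambiguity; IMUs drift), so fusing their temporal embeddings lets each compensate for the other.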


Datasets


Introduced in the Paper:

TotalCapture

Used in the Paper:

Human3.6M
Task                     | Dataset       | Model   | Metric Name        | Metric Value | Global Rank
3D Human Pose Estimation | Human3.6M     | PVH-TSP | Average MPJPE (mm) | 57.0         | #43
3D Human Pose Estimation | Total Capture | IMUPVH  | Average MPJPE (mm) | 70           | #7
3D Human Pose Estimation | Total Capture | PVH     | Average MPJPE (mm) | 107          | #9
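The benchmark metric above, mean per-joint position error (MPJPE), is the mean Euclidean distance between predicted and ground-truth 3D joint positions, averaged over joints (and frames). A minimal sketch, with a synthetic 21-joint pose whose every joint is offset by 57 mm purely for illustration:

```python
import numpy as np

def mpjpe(pred, gt):
    # Mean Euclidean distance over joints, in the units of the input arrays.
    return np.linalg.norm(pred - gt, axis=-1).mean()

gt = np.zeros((21, 3))            # ground-truth joints, in meters
pred = np.zeros((21, 3))
pred[:, 0] = 0.057                # every joint off by 57 mm along x
print(round(mpjpe(pred, gt) * 1000, 1))  # 57.0
```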

Methods


No methods listed for this paper.