Distill Knowledge from NRSfM for Weakly Supervised 3D Pose Learning

ICCV 2019 · Chaoyang Wang, Chen Kong, Simon Lucey

We propose to learn a 3D pose estimator by distilling knowledge from Non-Rigid Structure from Motion (NRSfM). Our method uses only 2D landmark annotations: no 3D data, multi-view or temporal footage, or object-specific priors are required. This alleviates the data bottleneck, which is one of the major concerns for supervised methods. The challenge of using NRSfM as a teacher is that it often produces poor depth reconstructions when the 2D projections are strongly ambiguous. Directly using these erroneous depths as hard targets would negatively impact the student. Instead, we propose a novel loss that ties the depth prediction to the cost function used in NRSfM. This gives the student pose estimator the freedom to reduce depth error by associating it with image features. Validated on the Human3.6M dataset, our learned 3D pose estimation network achieves more accurate reconstruction than NRSfM methods. It also outperforms other weakly supervised methods, despite using significantly less supervision.
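The abstract's key idea is to score the student's 3D prediction with an NRSfM-style cost function rather than regressing to the teacher's (possibly wrong) depths. The paper does not spell out the loss here, so the following is a minimal illustrative sketch under common NRSfM assumptions: orthographic projection, and a low-rank shape dictionary with per-sample codes. All names (`nrsfm_style_loss`, `shape_dict`, `codes`) are hypothetical, not the paper's API.

```python
import numpy as np

def nrsfm_style_loss(pred_3d, obs_2d, shape_dict, codes):
    """Illustrative sketch (not the paper's exact loss): evaluate the
    student's 3D prediction under an NRSfM-style cost instead of using
    teacher depths as hard targets.

    pred_3d    : (J, 3) student-predicted 3D joints in camera frame
    obs_2d     : (J, 2) annotated 2D landmarks
    shape_dict : (K, J, 3) assumed low-rank shape dictionary
    codes      : (K,) coefficients expressing the pose in the dictionary
    """
    # Reprojection term: under orthographic projection, the predicted
    # 3D joints must agree with the observed 2D landmarks in x, y.
    reproj = np.mean(np.sum((pred_3d[:, :2] - obs_2d) ** 2, axis=1))
    # Model term: the prediction should lie near the low-rank shape
    # space spanned by the dictionary (the NRSfM prior), which is what
    # constrains depth without any 3D supervision.
    recon = np.tensordot(codes, shape_dict, axes=1)          # (J, 3)
    model = np.mean(np.sum((pred_3d - recon) ** 2, axis=1))
    return reproj + model
```

Because only the 2D reprojection is tied to annotations, the depth component is constrained solely through the shape-space term, which is the sense in which the student is free to resolve depth ambiguity using image features.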



| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Weakly-supervised 3D Human Pose Estimation | Human3.6M | Wang et al. | Average MPJPE (mm) | 83.0 | #21 |
| Weakly-supervised 3D Human Pose Estimation | Human3.6M | Wang et al. | 3D Annotations | No | #1 |

