Fine-Grained Head Pose Estimation Without Keypoints

2 Oct 2017  ·  Nataniel Ruiz, Eunji Chong, James M. Rehg

Estimating the head pose of a person is a crucial problem with many applications, such as aiding gaze estimation, modeling attention, fitting 3D models to video and performing face alignment. Traditionally, head pose is computed by estimating keypoints from the target face and solving the 2D-to-3D correspondence problem with a mean human head model. We argue that this is a fragile method because it relies entirely on landmark detection performance, the extraneous head model and an ad-hoc fitting step. We present an elegant and robust way to determine pose by training a multi-loss convolutional neural network on 300W-LP, a large synthetically expanded dataset, to predict intrinsic Euler angles (yaw, pitch and roll) directly from image intensities through joint binned pose classification and regression. We present empirical tests on common in-the-wild pose benchmark datasets which show state-of-the-art results. Additionally, we test our method on a dataset usually used for depth-based pose estimation and begin to close the gap with state-of-the-art depth pose methods. We open-source our training and testing code and release our pre-trained models.
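
The joint binned classification and regression described above can be summarized in a short sketch. The following PyTorch snippet is a minimal illustration, not the authors' released implementation: the 66 three-degree bins covering roughly [-99°, 99°], the ResNet-50 backbone and the regression weight alpha follow the abstract and the "a=2" entry in the results table, while names such as `HopenetSketch` and `multi_loss` are hypothetical.

```python
# Minimal sketch of a multi-loss head pose network: per-angle binned
# classification plus a regression loss on the expected angle.
# Assumptions (not taken from the paper's code): 66 bins of 3 degrees,
# ResNet-50 features of size 2048, alpha weighting the regression term.
import torch
import torch.nn as nn
import torchvision

class HopenetSketch(nn.Module):
    def __init__(self, num_bins=66):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)
        backbone.fc = nn.Identity()          # keep the 2048-d pooled features
        self.backbone = backbone
        # one classification head per Euler angle
        self.fc_yaw = nn.Linear(2048, num_bins)
        self.fc_pitch = nn.Linear(2048, num_bins)
        self.fc_roll = nn.Linear(2048, num_bins)

    def forward(self, x):
        feats = self.backbone(x)
        return self.fc_yaw(feats), self.fc_pitch(feats), self.fc_roll(feats)

def multi_loss(logits, angle_deg, alpha=2.0, num_bins=66):
    """Cross-entropy on the binned angle plus alpha * MSE between the
    ground-truth angle and the expectation under the softmax distribution."""
    bin_values = torch.arange(num_bins, dtype=torch.float32,
                              device=logits.device) * 3 - 99
    # bin index of the ground-truth angle (3-degree bins over [-99, 99])
    target_bin = torch.clamp(((angle_deg + 99) / 3).long(), 0, num_bins - 1)
    cls_loss = nn.functional.cross_entropy(logits, target_bin)
    expected = torch.sum(nn.functional.softmax(logits, dim=1) * bin_values, dim=1)
    reg_loss = nn.functional.mse_loss(expected, angle_deg)
    return cls_loss + alpha * reg_loss
```

In training, this loss would be evaluated separately for the yaw, pitch and roll heads and the three terms summed; at test time the expected angle recovered from the softmax gives the fine-grained prediction.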


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Head Pose Estimation | AFLW | Ruiz et al. | MAE | 5.324 | # 5 |
| Head Pose Estimation | AFLW2000 | Hopenet | MAE | 6.15 | # 20 |
| Head Pose Estimation | AFLW2000 | Hopenet | Geodesic Error (GE) | 9.93 | # 5 |
| Head Pose Estimation | AFLW2000 | Multi-Loss ResNet50 (a=2) | MAE | 6.155 | # 21 |
| Head Pose Estimation | BIWI | Hopenet | MAE (trained with other data) | 4.89 | # 17 |
| Head Pose Estimation | BIWI | Hopenet | Geodesic Error (GE) | 9.53 | # 5 |
| Head Pose Estimation | BIWI | Hopenet | MAE-aligned (trained with other data) | 3.48 | # 5 |
| Head Pose Estimation | BIWI | Hopenet | Geodesic Error - aligned (GE) | 6.6 | # 5 |
| Head Pose Estimation | BIWI | Multi-Loss ResNet50 | MAE (trained with BIWI data) | 4.895 | # 9 |
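
The table reports two metrics: mean absolute error (MAE) averages the absolute yaw, pitch and roll errors in degrees, while geodesic error (GE) measures the angle of the relative rotation between predicted and ground-truth rotation matrices. The sketch below only illustrates these definitions and is not the benchmarks' official evaluation code; GE is returned in degrees here, which is an assumption made to match the magnitudes in the table.

```python
# Illustrative metric definitions (assumed, not the official benchmark code).
import numpy as np

def mean_absolute_error(pred_deg, gt_deg):
    """MAE averaged over yaw, pitch and roll; both arrays shaped (N, 3), in degrees."""
    return np.mean(np.abs(np.asarray(pred_deg) - np.asarray(gt_deg)))

def geodesic_error(R_pred, R_gt):
    """Angle (degrees) of the relative rotation R_pred^T R_gt between 3x3 matrices."""
    cos = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```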
