WHENet: Real-time Fine-Grained Estimation for Wide Range Head Pose

20 May 2020  ·  Yijun Zhou, James Gregson ·

We present an end-to-end head-pose estimation network designed to predict Euler angles across the full range of head yaw from a single RGB image. Existing methods perform well for frontal views, but few target head pose from all viewpoints, which has applications in autonomous driving and retail. Our network builds on multi-loss approaches, with loss functions and training strategies adapted to wide-range estimation. Additionally, we extract ground-truth labelings of non-frontal views from a current panoptic dataset for the first time. The resulting Wide Headpose Estimation Network (WHENet) is the first fine-grained modern method applicable to the full range of head yaw (hence "wide"), yet it also meets or beats state-of-the-art methods on frontal head-pose estimation. The network is compact and efficient enough for mobile devices and applications.
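The multi-loss approach the abstract refers to combines a classification loss over discretized angle bins with a regression loss on the continuous angle recovered as a probability-weighted average of bin centers. The sketch below illustrates that idea in NumPy for a single yaw head; the bin count, bin width, and loss weight `alpha` are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Assumed bin setup: full 360-degree yaw range split into 120 bins of 3 degrees.
N_BINS = 120
BIN_WIDTH = 360.0 / N_BINS
bin_centers = -180.0 + BIN_WIDTH * (np.arange(N_BINS) + 0.5)

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def expected_angle(logits):
    """Soft-argmax: probability-weighted mean of the bin centers."""
    return float(np.dot(softmax(logits), bin_centers))

def multi_loss(logits, true_angle, alpha=1.0):
    """Cross-entropy on the true bin plus alpha * squared error on the
    continuous angle (the HopeNet-style combined loss, simplified)."""
    p = softmax(logits)
    true_bin = int((true_angle + 180.0) // BIN_WIDTH)
    ce = -np.log(p[true_bin] + 1e-12)
    se = (expected_angle(logits) - true_angle) ** 2
    return ce + alpha * se
```

In a full network there would be one such head per Euler angle (yaw, pitch, roll), each with its own bin grid, sharing a single CNN backbone.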


Results from the Paper


Task                  Dataset   Model     Metric Name                    Metric Value  Global Rank
Head Pose Estimation  AFLW2000  WHENet    MAE                            5.42          # 19
Head Pose Estimation  AFLW2000  WHENet-V  MAE                            4.83          # 14
Head Pose Estimation  BIWI      WHENet-V  MAE (trained with other data)  3.48          # 3
Head Pose Estimation  BIWI      WHENet    MAE (trained with other data)  3.81          # 7
Head Pose Estimation  Panoptic  WHENet    Geodesic Error (GE)            24.38         # 6
