Subpixel Heatmap Regression for Facial Landmark Localization

3 Nov 2021  ·  Adrian Bulat, Enrique Sanchez, Georgios Tzimiropoulos ·

Deep learning models based on heatmap regression have revolutionized the task of facial landmark localization, with existing models working robustly under large poses, non-uniform illumination and shadows, occlusions and self-occlusions, and low resolution and blur. However, despite their wide adoption, heatmap regression approaches suffer from discretization-induced errors in both the heatmap encoding and decoding processes. In this work we show that these errors have a surprisingly large negative impact on facial alignment accuracy. To alleviate this problem, we propose a new approach to heatmap encoding and decoding that leverages the underlying continuous distribution. To take full advantage of the newly proposed encoding-decoding mechanism, we also introduce a Siamese-based training scheme that enforces heatmap consistency across various geometric image transformations. Our approach offers noticeable gains across multiple datasets, setting a new state-of-the-art result in facial landmark localization. Code alongside the pretrained models will be made available at https://www.adrianbulat.com/face-alignment
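To illustrate the discretization error the abstract refers to, the sketch below (a simplified illustration, not the paper's exact method; the function names and the local centre-of-mass refinement are assumptions for demonstration) encodes a landmark at a non-integer location as a continuous Gaussian heatmap, then compares naive integer-argmax decoding against a subpixel decoding that refines the peak with a local soft-argmax:

```python
import numpy as np

def encode_heatmap(h, w, cx, cy, sigma=1.5):
    # Evaluate a continuous Gaussian at pixel centres: the true peak
    # (cx, cy) need not fall on the integer grid.
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def decode_argmax(hm):
    # Naive decoding: integer argmax, accurate only to about +/- 0.5 px.
    y, x = np.unravel_index(np.argmax(hm), hm.shape)
    return float(x), float(y)

def decode_subpixel(hm, win=3):
    # Refine the argmax with a centre of mass over a small window
    # (a local soft-argmax), recovering a continuous peak estimate.
    y, x = np.unravel_index(np.argmax(hm), hm.shape)
    r = win // 2
    y0, y1 = max(0, y - r), min(hm.shape[0], y + r + 1)
    x0, x1 = max(0, x - r), min(hm.shape[1], x + r + 1)
    patch = hm[y0:y1, x0:x1]
    ys, xs = np.mgrid[y0:y1, x0:x1]
    w = patch / patch.sum()
    return float((xs * w).sum()), float((ys * w).sum())

if __name__ == "__main__":
    hm = encode_heatmap(64, 64, cx=31.3, cy=24.7)
    print(decode_argmax(hm))    # integer-quantized estimate
    print(decode_subpixel(hm))  # continuous, subpixel estimate
```

Because landmark coordinates in normalized-error metrics are measured at subpixel precision, even a half-pixel quantization per landmark accumulates into a measurable NME penalty, which is why continuous encoding-decoding helps.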


Results from the Paper


| Task           | Dataset      | Model   | Metric                        | Value | Global Rank |
|----------------|--------------|---------|-------------------------------|-------|-------------|
| Face Alignment | 300W         | SHR-FAN | NME_inter-ocular (%, Full)    | 2.94  | #4          |
| Face Alignment | 300W         | SHR-FAN | NME_inter-ocular (%, Common)  | 2.61  | #5          |
| Face Alignment | 300W         | SHR-FAN | NME_inter-ocular (%, Challenge) | 4.13 | #1         |
| Face Alignment | 300W Split 2 | SH-FAN  | NME (inter-ocular)            | 2.94  | #1          |
| Face Alignment | AFLW-19      | SHR-FAN | NME_diag (%, Full)            | 1.31  | #4          |
| Face Alignment | AFLW-19      | SHR-FAN | NME_diag (%, Frontal)         | 1.12  | #3          |
| Face Alignment | AFLW-19      | SHR-FAN | NME_box (%, Full)             | 2.14  | #5          |
| Face Alignment | AFLW-19      | SHR-FAN | AUC_box@0.07 (%, Full)        | 70.0  | #3          |
| Face Alignment | COFW         | SH-FAN  | Mean Error Rate (%)           | 3.02  | #1          |
| Face Alignment | COFW-68      | SH-FAN  | NME (box)                     | 2.47  | #1          |
| Face Alignment | COFW-68      | SH-FAN  | AUC@7 (box)                   | 64.9  | #1          |
| Face Alignment | WFLW         | SH-FAN  | NME_inter-ocular (%, all)     | 3.72  | #1          |
| Face Alignment | WFLW         | SH-FAN  | AUC_inter-ocular@0.1 (%, all) | 63.1  | #1          |
| Face Alignment | WFLW         | SH-FAN  | FR_inter-ocular@0.1 (%, all)  | 1.55  | #1          |
