Towards Accurate Facial Landmark Detection via Cascaded Transformers

Accurate facial landmarks are essential prerequisites for many tasks related to human faces. In this paper, we propose an accurate facial landmark detector based on cascaded transformers. We formulate facial landmark detection as a coordinate regression task so that the model can be trained end-to-end. Through self-attention in transformers, our model inherently exploits the structured relationships between landmarks, which benefits landmark detection under challenging conditions such as large pose and occlusion. During cascaded refinement, our model extracts the most relevant image features around each target landmark for coordinate prediction, based on a deformable attention mechanism, thus yielding more accurate alignment. In addition, we propose a novel decoder that refines image features and landmark positions simultaneously. With only a small increase in parameters, the detection performance improves further. Our model achieves new state-of-the-art performance on several standard facial landmark detection benchmarks, and shows good generalization ability in cross-dataset evaluation.
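To make the cascaded-refinement idea concrete, below is a minimal PyTorch-style sketch, not the authors' implementation: landmark queries attend to one another via self-attention (capturing structural relations between landmarks) and to image features sampled at the current landmark estimates, then each stage predicts coordinate offsets. The class name CascadedLandmarkDecoder, the stage count, and the use of bilinear grid_sample in place of the paper's deformable attention are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CascadedLandmarkDecoder(nn.Module):
    """Sketch of cascaded coordinate regression for facial landmarks.

    Simplification: bilinear sampling at the current landmark estimates
    stands in for the deformable attention described in the paper.
    """

    def __init__(self, num_landmarks=68, dim=256, num_stages=3, num_heads=8):
        super().__init__()
        self.queries = nn.Embedding(num_landmarks, dim)            # one query per landmark
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.to_offset = nn.Linear(dim, 2)                          # per-stage (dx, dy) in [0, 1] coords
        self.num_stages = num_stages

    def forward(self, feat_map, init_coords):
        # feat_map: (B, dim, H, W) backbone features
        # init_coords: (B, L, 2) landmark coordinates normalized to [0, 1], (x, y) order
        coords = init_coords
        q = self.queries.weight.unsqueeze(0).expand(feat_map.size(0), -1, -1)
        for _ in range(self.num_stages):
            # sample image features around current landmark estimates
            grid = coords.unsqueeze(2) * 2 - 1                      # (B, L, 1, 2), grid_sample expects [-1, 1]
            sampled = F.grid_sample(feat_map, grid, align_corners=False)  # (B, dim, L, 1)
            q = q + sampled.squeeze(-1).transpose(1, 2)             # (B, L, dim)
            # self-attention over landmark queries exploits structural relations
            q = q + self.self_attn(q, q, q, need_weights=False)[0]
            # cascaded refinement: predict an offset and update the coordinates
            coords = (coords + self.to_offset(q)).clamp(0.0, 1.0)
        return coords


# Usage sketch: refine 68 landmarks from a 32x32 feature map.
decoder = CascadedLandmarkDecoder()
feats = torch.randn(2, 256, 32, 32)
init = torch.rand(2, 68, 2)
refined = decoder(feats, init)                                      # (2, 68, 2)
```

In this sketch each stage reuses the same attention and offset head; the paper's decoder additionally refines the image features alongside the landmark positions at each stage.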

CVPR 2022

Results from the Paper


Task            Dataset       Model   Metric Name                       Metric Value  Global Rank
Face Alignment  300W          DTLD+   NME_inter-ocular (%, Full)        2.96          #9
Face Alignment  300W          DTLD+   NME_inter-ocular (%, Common)      2.6           #7
Face Alignment  300W          DTLD+   NME_inter-ocular (%, Challenge)   4.48          #7
Face Alignment  300W Split 2  DTLD-s  NME (box)                         2.05          #2
Face Alignment  300W Split 2  DTLD-s  AUC@7 (box)                       70.9          #2
Face Alignment  AFLW-19       DTLD+   NME_diag (%, Full)                1.37          #6
Face Alignment  COFW          DTLD+   NME (%, inter-ocular)             3.02          #2
Face Alignment  WFLW          DTLD+   NME (inter-ocular)                4.05          #4
Face Alignment  WFLW          DTLD+   FR@10 (inter-ocular)              2.68          #6
