When Liebig's Barrel Meets Facial Landmark Detection: A Practical Model

27 May 2021 · Haibo Jin, Jinpeng Li, Shengcai Liao, Ling Shao

In recent years, significant progress has been made in facial landmark detection. However, few prior works have thoroughly discussed models for practical applications; instead, they often focus on improving one or two issues at a time while ignoring the others. To bridge this gap, we aim to explore a practical model that is simultaneously accurate, robust, efficient, generalizable, and end-to-end trainable. To this end, we first propose a baseline model equipped with a single transformer decoder as the detection head. To further improve accuracy, we propose two lightweight modules, namely dynamic query initialization (DQInit) and query-aware memory (QAMem). Specifically, DQInit dynamically initializes the decoder queries from the inputs, enabling the model to match the accuracy of counterparts with multiple decoder layers. QAMem enhances the discriminative ability of queries on low-resolution feature maps by assigning a separate memory value to each query rather than a shared one. With QAMem, our model no longer depends on high-resolution feature maps yet still obtains superior accuracy. Extensive experiments and analysis on three popular benchmarks demonstrate the effectiveness and practical advantages of the proposed model. Notably, our model achieves a new state of the art on WFLW as well as competitive results on 300W and COFW, while running at 50+ FPS.
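
As a concrete illustration of the two modules, the sketch below shows one way DQInit and QAMem could be realized in PyTorch. The module names follow the paper, but the tensor shapes, the pooling-and-projection scheme in DQInit, and the per-query value projections in QAMem are assumptions made for illustration, not the authors' implementation.

```python
# Sketch of the decoder-head modules described in the abstract (assumed details).
import torch
import torch.nn as nn


class DQInit(nn.Module):
    """Dynamic query initialization: derive the decoder queries from the
    input feature map instead of using fixed learned embeddings."""

    def __init__(self, in_dim: int, d_model: int, num_queries: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                  # global context vector
        self.proj = nn.Linear(in_dim, num_queries * d_model)
        self.num_queries, self.d_model = num_queries, d_model

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) backbone feature map
        ctx = self.pool(feat).flatten(1)                     # (B, C)
        q = self.proj(ctx)                                   # (B, N * D)
        return q.view(-1, self.num_queries, self.d_model)    # (B, N, D)


class QAMem(nn.Module):
    """Query-aware memory: each query attends over its own value projection
    of the (low-resolution) feature map instead of one shared projection."""

    def __init__(self, d_model: int, num_queries: int):
        super().__init__()
        self.scale = d_model ** -0.5
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        # One value projection matrix per query: (N, D, D).
        self.w_v = nn.Parameter(torch.randn(num_queries, d_model, d_model) * 0.02)

    def forward(self, queries: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
        # queries: (B, N, D); memory: (B, HW, D) flattened feature map
        q, k = self.w_q(queries), self.w_k(memory)
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)  # (B, N, HW)
        v = torch.einsum("btd,nde->bnte", memory, self.w_v)  # per-query memory values
        out = torch.einsum("bnt,bnte->bne", attn, v)         # (B, N, D)
        return queries + out                                 # residual update
```

A hypothetical usage under the same assumptions, with a 1x1 convolution reducing a ResNet-style feature map to the decoder width before flattening it into the memory:

```python
# 98 WFLW landmarks, 256-d decoder, 8x8 backbone output (all illustrative).
feat = torch.randn(2, 2048, 8, 8)
queries = DQInit(2048, 256, 98)(feat)                              # (2, 98, 256)
memory = nn.Conv2d(2048, 256, 1)(feat).flatten(2).transpose(1, 2)  # (2, 64, 256)
landmark_feats = QAMem(256, 98)(queries, memory)                   # (2, 98, 256)
```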


Datasets

300W · COFW · WFLW

Results from the Paper


Task            Dataset  Model                   Metric                           Value  Global Rank
Face Alignment  300W     BarrelNet (ResNet-101)  NME_inter-ocular (%, Full)       3.09   #12
Face Alignment  300W     BarrelNet (ResNet-101)  NME_inter-ocular (%, Common)     2.73   #14
Face Alignment  300W     BarrelNet (ResNet-101)  NME_inter-ocular (%, Challenge)  4.6    #10
Face Alignment  COFW     BarrelNet (ResNet-101)  NME_inter-ocular (%)             3.1    #5
Face Alignment  WFLW     BarrelNet (ResNet-101)  NME_inter-ocular (%)             4.2    #12
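
For reference, NME (inter-ocular) in the table above is the standard normalized mean error: the mean Euclidean distance between predicted and ground-truth landmarks, divided by the distance between the two outer eye corners. A minimal NumPy sketch; the eye-corner indices are dataset-specific (36 and 45 are the usual choice for the 68-point 300W scheme):

```python
import numpy as np

def nme_inter_ocular(pred: np.ndarray, gt: np.ndarray,
                     left_idx: int = 36, right_idx: int = 45) -> float:
    """Mean per-landmark L2 error normalized by the inter-ocular distance.
    pred, gt: (N, 2) arrays of predicted / ground-truth landmark coordinates."""
    d_io = np.linalg.norm(gt[left_idx] - gt[right_idx])
    return float(np.mean(np.linalg.norm(pred - gt, axis=1)) / d_io)
```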

Methods


DQInit (dynamic query initialization) · QAMem (query-aware memory)