Adversarial Learning Semantic Volume for 2D/3D Face Shape Regression in the Wild

Regression-based methods have revolutionized 2D landmark localization by exploiting deep neural networks and massive annotated in-the-wild datasets. However, 3D landmark localization remains challenging due to the scarcity of annotated datasets and the ambiguous nature of landmarks from a 3D perspective. This paper revisits regression-based methods and proposes an adversarial voxel and coordinate regression framework for 2D and 3D facial landmark localization in real-world scenarios. First, a semantic volumetric representation is introduced to encode the per-voxel likelihood of positions being the 3D landmarks. Then, an end-to-end pipeline is designed to jointly regress the proposed volumetric representation and the coordinate vector. Such a pipeline not only enhances the robustness and accuracy of the predictions but also unifies 2D and 3D landmark localization, so that 2D and 3D datasets can be utilized simultaneously. Further, an adversarial learning strategy is employed to distill the 3D structure learned from synthetic datasets into real-world datasets under weakly supervised settings, where an auxiliary regression discriminator encourages the network to produce plausible predictions for both synthetic and real-world images. The effectiveness of our method is validated on the 3DFAW and AFLW2000-3D benchmark datasets for both 2D and 3D facial landmark localization tasks. The experimental results show that the proposed method achieves significant improvements over previous state-of-the-art methods.
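To make the two core ingredients concrete, the sketch below illustrates (i) a per-voxel likelihood ("semantic volume") encoding of 3D landmarks as Gaussians on a voxel grid and (ii) a joint loss that combines volume regression with direct coordinate regression. This is a minimal, PyTorch-style illustration under assumed settings; the grid size, sigma, loss weighting, and function names are not taken from the paper.

```python
# Minimal sketch (assumptions, not the paper's exact configuration):
# each 3D landmark is encoded as a 3D Gaussian likelihood over a voxel
# grid, and the training objective jointly penalizes the predicted
# volume and the predicted coordinate vector.
import torch
import torch.nn.functional as F

def encode_semantic_volume(landmarks, grid=64, sigma=2.0):
    """landmarks: (N, 3) tensor of voxel-space coordinates in [0, grid).
    Returns an (N, grid, grid, grid) volume of per-voxel likelihoods."""
    axis = torch.arange(grid, dtype=torch.float32)
    zz, yy, xx = torch.meshgrid(axis, axis, axis, indexing="ij")
    grid_pts = torch.stack([xx, yy, zz], dim=-1)                # (D, H, W, 3)
    diff = grid_pts[None] - landmarks[:, None, None, None, :]   # (N, D, H, W, 3)
    dist_sq = (diff ** 2).sum(dim=-1)
    return torch.exp(-dist_sq / (2.0 * sigma ** 2))

def joint_loss(pred_volume, pred_coords, gt_volume, gt_coords, lam=1.0):
    """Joint voxel-and-coordinate regression objective (weight lam assumed)."""
    volume_term = F.mse_loss(pred_volume, gt_volume)
    coord_term = F.mse_loss(pred_coords, gt_coords)
    return volume_term + lam * coord_term
```

In the adversarial stage described above, an auxiliary regression discriminator would additionally score predictions on synthetic versus real images, adding a standard adversarial term to this joint objective; its exact form is not shown here.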


Results from the Paper


Task            Dataset        Model   Metric     Value    Global Rank
Face Alignment  AFLW2000-3D    JVCR    Mean NME   3.31%    #3

Methods


No methods listed for this paper.