The primary aim of this work is to demonstrate that information preserved by facial landmarks (gender in particular) can be further accentuated by leveraging generative models to synthesize the corresponding faces.
Face normalization provides an effective and inexpensive way to distill face identity and suppress facial variations for recognition.
The availability of large-scale facial databases, together with the remarkable progress of deep learning techniques, in particular Generative Adversarial Networks (GANs), has led to the generation of extremely realistic fake facial content, raising obvious concerns about the potential for misuse.
Free access to large-scale public databases, together with the fast progress of deep learning techniques, in particular Generative Adversarial Networks, has led to the generation of very realistic fake content, with corresponding implications for society in this era of fake news.
In this work, we propose a framework called InterFaceGAN to interpret the disentangled face representation learned by state-of-the-art GAN models and study the properties of the facial semantics encoded in the latent space.
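The core operation behind this kind of latent-space interpretation can be illustrated with a minimal sketch. Assume a semantic attribute is linearly separable in the latent space, so a unit normal vector `n` of the separating hyperplane serves as an editing direction; moving a latent code `z` along `n` then shifts the attribute, and projecting one direction orthogonally to another decorrelates two attributes. All names here (`edit_latent`, `condition_direction`, the 512-dimensional latent size) are illustrative assumptions, not the authors' API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a latent code z sampled from the GAN's latent space,
# and a unit normal n of a hyperplane separating a binary attribute
# (e.g. smiling vs. not smiling), as found with a linear classifier.
latent_dim = 512
z = rng.standard_normal(latent_dim)
n = rng.standard_normal(latent_dim)
n /= np.linalg.norm(n)  # normalize the attribute direction

def edit_latent(z, n, alpha):
    """Move the latent code along the semantic direction by step alpha."""
    return z + alpha * n

def condition_direction(n1, n2):
    """Remove from n1 its component along n2, so editing along the
    result changes attribute 1 while leaving attribute 2's hyperplane
    distance unchanged (conditional manipulation)."""
    n2 = n2 / np.linalg.norm(n2)
    projected = n1 - (n1 @ n2) * n2
    return projected / np.linalg.norm(projected)

# Pushing z to the positive side of the boundary: the signed distance
# to the hyperplane grows linearly with alpha.
z_pos = edit_latent(z, n, 3.0)
print(round(float((z_pos - z) @ n), 6))  # → 3.0
```

The edited code `z_pos` would then be fed back through the generator to synthesize a face with the shifted attribute; the conditional direction is what allows, say, changing age without also changing eyeglasses when the two directions are correlated.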
Although existing models can generate realistic target images, it is difficult for them to preserve the structure of the source image.
Specifically, we first train a self-supervised style encoder on a generic artistic dataset to extract the representations of arbitrary styles.
In addition, the lack of high-quality paired data remains an obstacle for both methods.
The capacity to recognize faces under varied poses is a fundamental human ability that presents a unique challenge for computer vision systems.
In this paper, we present the Surrey Face Model, a multi-resolution 3D Morphable Model that we make available to the public for non-commercial purposes.