Face Generation
120 papers with code • 0 benchmarks • 4 datasets
Face generation is the task of generating (or interpolating) new faces from an existing dataset.
The state-of-the-art results for this task are tracked under the Image Generation parent task.
(Image credit: Progressive Growing of GANs for Improved Quality, Stability, and Variation)
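Interpolation between faces is typically done in a generator's latent space. A minimal sketch of the idea, assuming a trained generator `G(z)` exists (not shown here): spherical interpolation (slerp) between two Gaussian latent vectors keeps the intermediate latents at a typical norm for the prior, which tends to produce more plausible in-between faces than straight linear blending.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors at fraction t."""
    z0n = z0 / np.linalg.norm(z0)
    z1n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1.0 - t) * z0 + t * z1  # vectors are (anti)parallel; fall back to lerp
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# Each intermediate latent would be fed to a trained generator, e.g. G(z),
# to render one frame of the face morph.
rng = np.random.default_rng(0)
z_a = rng.standard_normal(512)
z_b = rng.standard_normal(512)
frames = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 8)]
print(len(frames), frames[0].shape)
```

The endpoints reproduce the original latents exactly (`t=0` gives `z_a`, `t=1` gives `z_b`), so the morph starts and ends on the two source faces.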
Benchmarks
These leaderboards are used to track progress in Face Generation
Libraries
Use these libraries to find Face Generation models and implementations
Subtasks
Latest papers with no code
Towards Multi-domain Face Landmark Detection with Synthetic Data from Diffusion model
Finally, we fine-tuned a pre-trained face landmark detection model on the synthetic dataset to achieve multi-domain face landmark detection.
NeRF-AD: Neural Radiance Field with Attention-based Disentanglement for Talking Face Synthesis
However, most existing NeRF-based methods either burden NeRF with complex learning tasks while lacking methods for supervised multimodal feature fusion, or cannot precisely map audio to the facial region related to speech movements.
EmoTalker: Emotionally Editable Talking Face Generation via Diffusion Model
In recent years, the field of talking face generation has attracted considerable attention, with certain methods adept at generating virtual faces that convincingly imitate human expressions.
Detecting Face Synthesis Using a Concealed Fusion Model
In this paper, we propose a fusion-based strategy to detect face image synthesis while providing resiliency to several attacks.
Towards a Simultaneous and Granular Identity-Expression Control in Personalized Face Generation
We devise a novel diffusion model that can undertake the tasks of face swapping and reenactment simultaneously.
EFHQ: Multi-purpose ExtremePose-Face-HQ dataset
Existing facial datasets, while having plentiful images at near-frontal views, lack images with extreme head poses, leading to degraded performance of deep learning models when dealing with profile or pitched faces.
Towards Flexible, Scalable, and Adaptive Multi-Modal Conditioned Face Synthesis
Recent progress in multi-modal conditioned face synthesis has enabled the creation of visually striking and accurately aligned facial images.
DREAM-Talk: Diffusion-based Realistic Emotional Audio-driven Method for Single Image Talking Face Generation
The generation of emotional talking faces from a single portrait image remains a significant challenge.
Gaussian Harmony: Attaining Fairness in Diffusion-based Face Generation Models
We mitigate bias by localizing the means of the facial attributes in the latent space of the diffusion model using Gaussian mixture models (GMMs).
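The core operation in the abstract above, locating the means of attribute modes in a latent space with a Gaussian mixture, can be sketched in miniature. This is a toy 1-D illustration with synthetic data, not the paper's method: the latents here are drawn from two hand-picked Gaussians, whereas the actual work operates on the diffusion model's latent space.

```python
import numpy as np

# Synthetic stand-in for one latent dimension: two attribute modes
# centred at -2 and +2.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(2.0, 0.5, 300)])

mu = np.array([-1.0, 1.0])     # initial component means
sigma = np.array([1.0, 1.0])   # initial standard deviations
pi = np.array([0.5, 0.5])      # mixing weights

for _ in range(50):  # EM iterations for a 2-component 1-D GMM
    # E-step: responsibility of each component for each sample
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(x)

print(np.sort(mu))  # recovered component means, near -2 and +2
```

Once the mixture means are known, a fairness-oriented sampler could, for example, draw latents from the components in balanced proportions rather than at the frequencies found in the training data.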
AE-NeRF: Audio Enhanced Neural Radiance Field for Few Shot Talking Head Synthesis
Audio-driven talking head synthesis is a promising topic with wide applications in digital humans, filmmaking, and virtual reality.