Face Generation
120 papers with code • 0 benchmarks • 4 datasets
Face generation is the task of generating (or interpolating) new faces from an existing dataset.
The state-of-the-art results for this task are located in the Image Generation parent task.
(Image credit: Progressive Growing of GANs for Improved Quality, Stability, and Variation)
Benchmarks
These leaderboards are used to track progress in Face Generation.
Libraries
Use these libraries to find Face Generation models and implementations.
Subtasks
Latest papers with no code
DREAM-Talk: Diffusion-based Realistic Emotional Audio-driven Method for Single Image Talking Face Generation
The generation of emotional talking faces from a single portrait image remains a significant challenge.
Gaussian Harmony: Attaining Fairness in Diffusion-based Face Generation Models
We mitigate the bias by localizing the means of the facial attributes in the latent space of the diffusion model using Gaussian mixture models (GMM).
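The excerpt describes localizing per-attribute means in a latent space with a Gaussian mixture model. A minimal sketch of that idea, using scikit-learn's `GaussianMixture` on synthetic latent codes (the latent dimensionality, sample counts, and attribute clusters are illustrative assumptions, not the paper's actual setup):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for latent codes drawn from a generative model's
# latent space; two well-separated modes mimic two facial-attribute groups.
rng = np.random.default_rng(0)
latents = np.vstack([
    rng.normal(loc=-2.0, scale=0.5, size=(200, 8)),  # hypothetical attribute group A
    rng.normal(loc=+2.0, scale=0.5, size=(200, 8)),  # hypothetical attribute group B
])

# Fit a two-component GMM; each component's mean localizes one
# attribute mode in the latent space.
gmm = GaussianMixture(n_components=2, random_state=0).fit(latents)
attribute_means = gmm.means_  # shape: (n_components, latent_dim)
```

Once the component means are localized, samples can be steered toward or away from an attribute mode (e.g., by resampling components uniformly) to rebalance generation, which is the spirit of the bias mitigation the excerpt describes.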
AE-NeRF: Audio Enhanced Neural Radiance Field for Few Shot Talking Head Synthesis
Audio-driven talking head synthesis is a promising topic with wide applications in digital humans, film making, and virtual reality.
VectorTalker: SVG Talking Face Generation with Progressive Vectorisation
To address these, we propose a novel scalable vector graphic reconstruction and animation method, dubbed VectorTalker.
High-Fidelity Face Swapping with Style Blending
Face swapping has gained significant traction, driven by the wealth of human face synthesis methods enabled by deep learning.
GSmoothFace: Generalized Smooth Talking Face Generation via Fine Grained 3D Face Guidance
Our proposed GSmoothFace model mainly consists of the Audio to Expression Prediction (A2EP) module and the Target Adaptive Face Translation (TAFT) module.
FT2TF: First-Person Statement Text-To-Talking Face Generation
This achievement highlights our model's capability to bridge first-person statements and dynamic face generation, providing insightful guidance for future work.
Retrieving Conditions from Reference Images for Diffusion Models
Newly developed diffusion-based techniques have showcased phenomenal abilities in producing a wide range of high-quality images, sparking considerable interest in various applications.
Text-Guided 3D Face Synthesis -- From Generation to Editing
In the editing stage, we first employ a pre-trained diffusion model to update facial geometry or texture based on the texts.
Seeing through the Mask: Multi-task Generative Mask Decoupling Face Recognition
Therefore, this paper proposes a Multi-task gEnerative mask dEcoupling face Recognition (MEER) network to jointly handle these two tasks, which can learn occlusion-irrelevant and identity-related representations while achieving unmasked face synthesis.