Face Generation

120 papers with code • 0 benchmarks • 4 datasets

Face generation is the task of generating (or interpolating) new faces from an existing dataset.

State-of-the-art results for this task are tracked under the parent task, Image Generation.

(Image credit: Progressive Growing of GANs for Improved Quality, Stability, and Variation)
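As a rough illustration of the interpolation side of the task, the sketch below walks a line between two latent vectors of a GAN-style generator and decodes each point into a face image. The generator here is a randomly initialised placeholder, and the latent size and slerp helper are assumptions for the example; in practice you would load pre-trained weights from a face model such as a StyleGAN- or PGGAN-style checkpoint.

```python
# Minimal sketch of latent-space face interpolation with a GAN-style generator.
# The generator is a stand-in (randomly initialised, not a trained face model).
import torch
import torch.nn as nn

LATENT_DIM = 512

# Placeholder generator: maps a latent vector to a 64x64 RGB image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 4 * 4 * 128),
    nn.Unflatten(1, (128, 4, 4)),
    nn.ConvTranspose2d(128, 64, 4, stride=4),   # 4x4 -> 16x16
    nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, stride=4),     # 16x16 -> 64x64
    nn.Tanh(),
).eval()

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors."""
    z0n, z1n = z0 / z0.norm(), z1 / z1.norm()
    omega = torch.acos((z0n * z1n).sum().clamp(-1.0, 1.0))
    return (torch.sin((1 - t) * omega) * z0 + torch.sin(t * omega) * z1) / torch.sin(omega)

with torch.no_grad():
    z_a, z_b = torch.randn(LATENT_DIM), torch.randn(LATENT_DIM)
    # Decode 8 evenly spaced points between two "faces" in latent space.
    frames = [generator(slerp(z_a, z_b, t).unsqueeze(0)) for t in torch.linspace(0, 1, 8)]
    print(frames[0].shape)  # torch.Size([1, 3, 64, 64])
```

With a trained face generator, the decoded frames form a smooth morph between the two sampled identities.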

Latest papers with no code

DREAM-Talk: Diffusion-based Realistic Emotional Audio-driven Method for Single Image Talking Face Generation

no code yet • 21 Dec 2023

The generation of emotional talking faces from a single portrait image remains a significant challenge.

Gaussian Harmony: Attaining Fairness in Diffusion-based Face Generation Models

no code yet • 21 Dec 2023

We mitigate the bias by localizing the means of the facial attributes in the latent space of the diffusion model using Gaussian mixture models (GMM).
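The idea of localizing attribute modes with a Gaussian mixture lends itself to a short sketch. The snippet below is a generic illustration (not the paper's implementation) of fitting a GMM over latent codes and then sampling each component equally so no single attribute mode dominates generation; the component count, latent dimensionality, and the sample_balanced helper are assumptions for the example, and the latents are synthetic stand-ins for diffusion-model latents.

```python
# Generic illustration: balance generation across latent modes with a GMM.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
latents = rng.normal(size=(2000, 16))          # stand-in for diffusion latents

# Localise attribute modes as mixture components (e.g. one per attribute group).
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(latents)

def sample_balanced(gmm, n_per_component=8):
    """Draw the same number of latents from every component's Gaussian."""
    samples = []
    for k in range(gmm.n_components):
        samples.append(
            rng.multivariate_normal(gmm.means_[k], gmm.covariances_[k], size=n_per_component)
        )
    return np.concatenate(samples, axis=0)

balanced_latents = sample_balanced(gmm)
print(balanced_latents.shape)  # (32, 16) -- feed these to the generator/decoder
```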

AE-NeRF: Audio Enhanced Neural Radiance Field for Few Shot Talking Head Synthesis

no code yet • 18 Dec 2023

Audio-driven talking head synthesis is a promising topic with wide applications in digital humans, film-making and virtual reality.

VectorTalker: SVG Talking Face Generation with Progressive Vectorisation

no code yet • 18 Dec 2023

To address these issues, we propose a novel scalable vector graphic reconstruction and animation method, dubbed VectorTalker.

High-Fidelity Face Swapping with Style Blending

no code yet • 17 Dec 2023

Face swapping has gained significant traction, driven by the plethora of human face synthesis methods enabled by deep learning.

GSmoothFace: Generalized Smooth Talking Face Generation via Fine Grained 3D Face Guidance

no code yet • 12 Dec 2023

Our proposed GSmoothFace model mainly consists of the Audio to Expression Prediction (A2EP) module and the Target Adaptive Face Translation (TAFT) module.

FT2TF: First-Person Statement Text-To-Talking Face Generation

no code yet • 9 Dec 2023

This achievement highlights our model's capability to bridge first-person statements and dynamic face generation, providing insightful guidance for future work.

Retrieving Conditions from Reference Images for Diffusion Models

no code yet • 5 Dec 2023

Newly developed diffusion-based techniques have showcased phenomenal abilities in producing a wide range of high-quality images, sparking considerable interest in various applications.

Text-Guided 3D Face Synthesis -- From Generation to Editing

no code yet • 1 Dec 2023

In the editing stage, we first employ a pre-trained diffusion model to update facial geometry or texture based on the text prompts.

Seeing through the Mask: Multi-task Generative Mask Decoupling Face Recognition

no code yet • 20 Nov 2023

Therefore, this paper proposes a Multi-task gEnerative mask dEcoupling face Recognition (MEER) network to jointly handle these two tasks, which can learn occlusion-irrelevant and identity-related representations while achieving unmasked face synthesis.