Face Generation

120 papers with code • 0 benchmarks • 4 datasets

Face generation is the task of synthesizing new face images, or interpolating between existing ones, using a model learned from an existing face dataset.

The state-of-the-art results for this task are tracked under the Image Generation parent task.

(Image credit: Progressive Growing of GANs for Improved Quality, Stability, and Variation)
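
To make the "interpolating" part of the definition above concrete, here is a minimal latent-space interpolation sketch. It assumes only a generic generator that maps a latent vector to an image; the ToyGenerator below is a random stand-in for illustration, not a model from any of the papers listed here.

```python
import torch
import torch.nn as nn

# Stand-in generator: any pretrained face GAN (e.g. a PGGAN/StyleGAN checkpoint)
# exposes the same interface -- latent vector in, image tensor out.
class ToyGenerator(nn.Module):
    def __init__(self, latent_dim: int = 512, img_size: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 3 * img_size * img_size),
            nn.Tanh(),
        )
        self.img_size = img_size

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        out = self.net(z)
        return out.view(-1, 3, self.img_size, self.img_size)

def interpolate_faces(generator: nn.Module, z_a: torch.Tensor,
                      z_b: torch.Tensor, steps: int = 8) -> torch.Tensor:
    """Linearly interpolate between two latent codes and decode each point."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    z_path = (1 - alphas) * z_a + alphas * z_b   # (steps, latent_dim)
    with torch.no_grad():
        return generator(z_path)                  # (steps, 3, H, W)

if __name__ == "__main__":
    G = ToyGenerator()
    z_a, z_b = torch.randn(1, 512), torch.randn(1, 512)
    frames = interpolate_faces(G, z_a, z_b)
    print(frames.shape)  # torch.Size([8, 3, 64, 64])
```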

Latest papers with no code

Towards Multi-domain Face Landmark Detection with Synthetic Data from Diffusion model

no code yet • 24 Jan 2024

Finally, we fine-tuned a pre-trained face landmark detection model on the synthetic dataset to achieve multi-domain face landmark detection.
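
As a rough illustration of that final fine-tuning step, the sketch below runs a standard supervised fine-tuning loop on synthetic data. The LandmarkRegressor, the random tensors, and the hyperparameters are placeholders standing in for the paper's pretrained detector and its diffusion-generated dataset.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder for a pretrained landmark regressor: image in, (n_landmarks * 2)
# coordinates out. In practice this would be a loaded checkpoint.
class LandmarkRegressor(nn.Module):
    def __init__(self, n_landmarks: int = 68):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, n_landmarks * 2)

    def forward(self, x):
        return self.head(self.backbone(x))

# Synthetic stand-in data: diffusion-generated face images plus landmark labels
# would replace these random tensors.
images = torch.randn(256, 3, 128, 128)
landmarks = torch.rand(256, 68 * 2)
loader = DataLoader(TensorDataset(images, landmarks), batch_size=32, shuffle=True)

model = LandmarkRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # small LR for fine-tuning
criterion = nn.L1Loss()

for epoch in range(3):
    for imgs, gt in loader:
        optimizer.zero_grad()
        loss = criterion(model(imgs), gt)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```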

NeRF-AD: Neural Radiance Field with Attention-based Disentanglement for Talking Face Synthesis

no code yet • 23 Jan 2024

However, most existing NeRF-based methods either burden NeRF with complex learning tasks while lacking methods for supervised multimodal feature fusion, or cannot precisely map audio to the facial region related to speech movements.
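
The paper's attention-based disentanglement is not spelled out in this snippet; purely as a hedged illustration of what multimodal feature fusion between audio and facial features can look like, here is a generic cross-attention sketch. All module names and dimensions are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Generic cross-attention fusion: facial-region tokens attend over an audio
# feature sequence, so each region can pick up the audio frames relevant to it.
class AudioVisualFusion(nn.Module):
    def __init__(self, dim: int = 256, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, face_tokens: torch.Tensor, audio_tokens: torch.Tensor) -> torch.Tensor:
        # face_tokens: (B, n_regions, dim), audio_tokens: (B, n_frames, dim)
        fused, _ = self.attn(query=face_tokens, key=audio_tokens, value=audio_tokens)
        return self.norm(face_tokens + fused)  # residual connection

if __name__ == "__main__":
    fusion = AudioVisualFusion()
    face = torch.randn(2, 16, 256)   # 16 facial-region tokens
    audio = torch.randn(2, 40, 256)  # 40 audio frames
    print(fusion(face, audio).shape)  # torch.Size([2, 16, 256])
```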

EmoTalker: Emotionally Editable Talking Face Generation via Diffusion Model

no code yet • 16 Jan 2024

In recent years, the field of talking face generation has attracted considerable attention, with certain methods adept at generating virtual faces that convincingly imitate human expressions.

Detecting Face Synthesis Using a Concealed Fusion Model

no code yet • 8 Jan 2024

In this paper, we propose a fusion-based strategy to detect face image synthesis while providing resiliency to several attacks.
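
The snippet does not describe the concealed fusion model itself, so the sketch below only shows the generic idea a fusion-based detection strategy builds on: combining the "synthetic" probabilities of several independent detectors into one decision. The function name and weighting scheme are illustrative assumptions.

```python
import numpy as np

def fuse_scores(scores, weights=None):
    """Weighted average of per-detector 'synthetic' probabilities.

    scores:  (n_detectors, n_images) array of probabilities in [0, 1].
    weights: optional per-detector weights; defaults to uniform.
    """
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.full(scores.shape[0], 1.0 / scores.shape[0])
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return weights @ scores  # (n_images,) fused probabilities

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake outputs of three detectors on five images.
    scores = rng.uniform(size=(3, 5))
    fused = fuse_scores(scores)
    print((fused > 0.5).astype(int))  # fused real/synthetic decision per image
```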

Towards a Simultaneous and Granular Identity-Expression Control in Personalized Face Generation

no code yet • 2 Jan 2024

We devise a novel diffusion model that can undertake the task of simultaneous face swapping and reenactment.

EFHQ: Multi-purpose ExtremePose-Face-HQ dataset

no code yet • 28 Dec 2023

The existing facial datasets, while having plentiful images at near-frontal views, lack images with extreme head poses, leading to degraded performance of deep learning models when dealing with profile or pitched faces.

Towards Flexible, Scalable, and Adaptive Multi-Modal Conditioned Face Synthesis

no code yet • 26 Dec 2023

Recent progress in multi-modal conditioned face synthesis has enabled the creation of visually striking and accurately aligned facial images.

DREAM-Talk: Diffusion-based Realistic Emotional Audio-driven Method for Single Image Talking Face Generation

no code yet • 21 Dec 2023

The generation of emotional talking faces from a single portrait image remains a significant challenge.

Gaussian Harmony: Attaining Fairness in Diffusion-based Face Generation Models

no code yet • 21 Dec 2023

We mitigate the bias by localizing the means of the facial attributes in the latent space of the diffusion model using Gaussian mixture models (GMM).
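
A minimal sketch of that GMM step, assuming latent vectors for generated faces have already been collected: the synthetic two-mode data below stands in for the diffusion model's latent space, and the sampling at the end is a generic illustration rather than the paper's exact rebalancing procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-in latent codes: in the paper's setting these would come from the
# diffusion model's latent space, one vector per generated face.
rng = np.random.default_rng(0)
latents = np.concatenate([
    rng.normal(loc=-2.0, scale=1.0, size=(500, 16)),  # one attribute mode
    rng.normal(loc=+2.0, scale=1.0, size=(500, 16)),  # another attribute mode
])

# Fit a GMM; each component mean acts as a localized "center" for one
# attribute mode in latent space, and skewed weights signal bias.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(latents)

print("component means (first 4 dims):")
print(gmm.means_[:, :4])
print("component weights:", gmm.weights_)

# gmm.sample draws according to the fitted weights; a fairness-oriented
# variant could instead draw equally from each component's Gaussian.
samples, component_ids = gmm.sample(n_samples=10)
print(samples.shape, np.bincount(component_ids))
```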

AE-NeRF: Audio Enhanced Neural Radiance Field for Few Shot Talking Head Synthesis

no code yet • 18 Dec 2023

Audio-driven talking head synthesis is a promising topic with wide applications in digital human, film making and virtual reality.