Face Generation
118 papers with code • 0 benchmarks • 4 datasets
Face generation is the task of generating (or interpolating) new faces from an existing dataset.
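One common way to "interpolate" new faces is to blend between two latent codes and decode each intermediate point with a pretrained generator. A minimal sketch using spherical linear interpolation (slerp), which tends to stay on the latent distribution better than linear blending; the 512-dimensional latents and the generator `G` are illustrative assumptions, not tied to any specific paper here:

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors."""
    z0n = z0 / np.linalg.norm(z0)
    z1n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Vectors are (anti)parallel; fall back to linear interpolation.
        return (1.0 - t) * z0 + t * z1
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
z_a = rng.standard_normal(512)  # latent size is a typical choice, e.g. StyleGAN
z_b = rng.standard_normal(512)
steps = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 8)]
# Each intermediate code would then be decoded by a pretrained
# generator (hypothetical): img = G(z)
```

At `t = 0` and `t = 1` the interpolation returns the original endpoint codes, so the sequence of decoded images transitions smoothly from one face to the other.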
The state-of-the-art results for this task can be found under the parent task, Image Generation.
(Image credit: Progressive Growing of GANs for Improved Quality, Stability, and Variation)
Benchmarks
These leaderboards are used to track progress in Face Generation
Libraries
Use these libraries to find Face Generation models and implementations
Subtasks
Latest papers
Parameter and Data-Efficient Spectral StyleDCGAN
We present a simple, highly parameter- and data-efficient adversarial network for unconditional face generation.
Deepfake Generation and Detection: A Benchmark and Survey
In addition to the advancements in deepfake generation, corresponding detection technologies need to continuously evolve to regulate the potential misuse of deepfakes, such as for privacy invasion and phishing attacks.
LeGO: Leveraging a Surface Deformation Network for Animatable Stylized Face Generation with One Example
To this end, we propose a method that can produce a highly stylized 3D face model with desired topology.
Towards Controllable Face Generation with Semantic Latent Diffusion Models
To address this, we propose a semantic image synthesis (SIS) framework based on a novel Latent Diffusion Model architecture for human face generation and editing, able both to reproduce and manipulate a real reference image and to generate diversity-driven results.
Arc2Face: A Foundation Model of Human Faces
This paper presents Arc2Face, an identity-conditioned face foundation model which, given the ArcFace embedding of a person, can generate diverse photo-realistic images with a greater degree of face similarity than existing models.
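Face similarity for identity-conditioned models of this kind is commonly measured as the cosine similarity between identity embeddings (such as ArcFace's 512-dimensional vectors) of the reference and generated faces. A minimal sketch of that metric; the embedding vectors here are random stand-ins, not outputs of a real face recognition network:

```python
import numpy as np

def identity_similarity(emb_ref, emb_gen):
    """Cosine similarity between two identity embeddings.

    Higher values indicate the generated face better preserves
    the reference identity.
    """
    emb_ref = emb_ref / np.linalg.norm(emb_ref)
    emb_gen = emb_gen / np.linalg.norm(emb_gen)
    return float(np.dot(emb_ref, emb_gen))

rng = np.random.default_rng(42)
ref = rng.standard_normal(512)                   # stand-in reference embedding
gen_close = ref + 0.1 * rng.standard_normal(512)  # hypothetical identity-faithful sample
gen_far = rng.standard_normal(512)                # hypothetical unrelated face
```

A well-conditioned sample should score close to 1.0 against its reference, while an unrelated face in a high-dimensional embedding space scores near 0.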
Fast Text-to-3D-Aware Face Generation and Manipulation via Direct Cross-modal Mapping and Geometric Regularization
Text-to-3D-aware face (T3D Face) generation and manipulation is an emerging research hotspot in machine learning, but existing approaches still suffer from low efficiency and poor quality.
Real3D-Portrait: One-shot Realistic 3D Talking Portrait Synthesis
One-shot 3D talking portrait generation aims to reconstruct a 3D avatar from an unseen image, and then animate it with a reference video or audio to generate a talking portrait video.
Controllable 3D Face Generation with Conditional Style Code Diffusion
For 3D GAN inversion, we introduce two methods which aim to enhance the representation of style codes and alleviate 3D inconsistencies.
Cross-Age Contrastive Learning for Age-Invariant Face Recognition
Cross-age facial images are typically challenging and expensive to collect, making noise-free age-oriented datasets relatively small compared to widely used large-scale facial datasets.
Neural Text to Articulate Talk: Deep Text to Audiovisual Speech Synthesis achieving both Auditory and Photo-realism
Our method, which we call NEUral Text to ARticulate Talk (NEUTART), is a talking face generator that uses a joint audiovisual feature space, as well as speech-informed 3D facial reconstructions and a lip-reading loss for visual supervision.