Face Generation
114 papers with code • 1 benchmark • 5 datasets
Face generation is the task of generating (or interpolating) new faces from an existing dataset.
The state-of-the-art results for this task can be found under the Image Generation parent task.
(Image credit: Progressive Growing of GANs for Improved Quality, Stability, and Variation)
Libraries
Use these libraries to find Face Generation models and implementations.
Latest papers
Head Rotation in Denoising Diffusion Models
Denoising Diffusion Models (DDM) are emerging as the cutting-edge technology in the realm of deep generative modeling, challenging the dominance of Generative Adversarial Networks.
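The core idea behind such denoising diffusion models can be sketched in a few lines: a forward process gradually adds Gaussian noise to an image, and a learned network reverses this one step at a time. The following minimal NumPy sketch illustrates the standard DDPM formulation; all names (`betas`, `q_sample`, `p_sample`) are illustrative and not taken from any specific paper's code, and the trained noise-prediction network is replaced by the true noise for demonstration.

```python
import numpy as np

# Illustrative DDPM-style diffusion sketch (not any specific paper's code).
T = 100                                   # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)        # linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)           # cumulative signal retention

def q_sample(x0, t, rng):
    """Forward process: noise a clean image x0 directly to step t."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

def p_sample(xt, t, eps_hat, rng):
    """One reverse step: estimate x_{t-1} from x_t and predicted noise."""
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (xt - coef * eps_hat) / np.sqrt(alphas[t])
    if t > 0:  # no noise is added at the final step
        mean += np.sqrt(betas[t]) * rng.standard_normal(xt.shape)
    return mean

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))          # stand-in for a face image
xt, eps = q_sample(x0, T - 1, rng)        # heavily noised sample
# A trained network would predict eps from (xt, t); here we reuse the true
# noise purely to demonstrate the update rule.
x_prev = p_sample(xt, T - 1, eps, rng)
```

In a real face-generation model, `eps_hat` comes from a U-Net conditioned on the timestep (and, for conditional generation, on attributes such as pose or identity), and `p_sample` is iterated from `t = T-1` down to `0` starting from pure noise.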
Fast refacing of MR images with a generative neural network lowers re-identification risk and preserves volumetric consistency
To evaluate the performance of the proposed de-identification tool, a comparative study was conducted between several existing defacing and refacing tools, with two different segmentation algorithms (FAST and Morphobox).
Identity-Preserving Talking Face Generation with Landmark and Appearance Priors
Prior landmark characteristics of the speaker's face are employed to make the generated landmarks coincide with the facial outline of the speaker.
Laughing Matters: Introducing Laughing-Face Generation using Diffusion Models
Speech-driven animation has gained significant traction in recent years, with current methods achieving near-photorealistic results.
High-Fidelity 3D Face Generation from Natural Language Descriptions
Synthesizing high-quality 3D face models from natural language descriptions is very valuable for many applications, including avatar creation, virtual reality, and telepresence.
Collaborative Diffusion for Multi-Modal Face Generation and Editing
In this work, we present Collaborative Diffusion, where pre-trained uni-modal diffusion models collaborate to achieve multi-modal face generation and editing without re-training.
DCFace: Synthetic Face Generation with Dual Condition Diffusion Model
Our novel Patch-wise style extractor and Time-step dependent ID loss enable DCFace to consistently produce face images of the same subject under different styles with precise control.
Seeing What You Said: Talking Face Generation Guided by a Lip Reading Expert
To address the problem, we propose using a lip-reading expert to improve the intelligibility of the generated lip regions by penalizing the incorrect generation results.
Emotionally Enhanced Talking Face Generation
To mitigate this, we build a talking face generation framework conditioned on a categorical emotion to generate videos with appropriate expressions, making them more realistic and convincing.
High Fidelity Synthetic Face Generation for Rosacea Skin Condition from Limited Data
In this study, for the first time, a small dataset of Rosacea with 300 full-face images is utilized to further investigate the possibility of generating synthetic data.