69 papers with code • 0 benchmarks • 2 datasets
Face generation is the task of generating (or interpolating) new faces from an existing dataset.
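The "interpolating" part of the task usually means walking between two latent codes of a trained generator and rendering a face at each step. A minimal sketch of that idea, assuming a pretrained generator `G(z)` exists (not shown here); the function names and the 512-dim latent size are illustrative, not from any specific model:

```python
import numpy as np

def lerp(z0, z1, t):
    """Linear interpolation between two latent codes."""
    return (1.0 - t) * z0 + t * z1

def slerp(z0, z1, t):
    """Spherical interpolation, often preferred for Gaussian GAN latents."""
    omega = np.arccos(np.clip(
        np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)), -1.0, 1.0))
    so = np.sin(omega)
    if so < 1e-8:  # nearly parallel codes: fall back to linear interpolation
        return lerp(z0, z1, t)
    return (np.sin((1.0 - t) * omega) / so) * z0 + (np.sin(t * omega) / so) * z1

rng = np.random.default_rng(0)
z_a, z_b = rng.standard_normal(512), rng.standard_normal(512)
# 8 intermediate latent codes; each would be fed to a generator G(z) to render a face.
path = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 8)]
```

Spherical interpolation is commonly used because intermediate points of a straight line between two Gaussian samples fall in a lower-density region of the prior.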
The state-of-the-art results for this task are listed under the Image Generation parent task. These leaderboards are used to track progress in Face Generation.
We propose a novel attributes encoder for extracting multi-level target face attributes, and a new generator with carefully designed Adaptive Attentional Denormalization (AAD) layers to adaptively integrate the identity and the attributes for face synthesis.
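The core idea of an AAD layer is a learned attention mask that decides, per spatial location, whether identity information or attribute information drives the output. A rough numpy sketch of that blending, with the learned convolutions replaced by fixed per-channel weights; all names and shapes here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def aad_blend(h, identity_feat, attr_feat, w_mask):
    """Sketch of Adaptive Attentional Denormalization blending.

    h             : current activation map, shape (C, H, W)
    identity_feat : identity-conditioned modulation of h, same shape
    attr_feat     : attribute-conditioned modulation of h, same shape
    w_mask        : per-channel weights standing in for the learned mask conv
    """
    # Instance-normalize h per channel before modulation.
    mu = h.mean(axis=(1, 2), keepdims=True)
    sigma = h.std(axis=(1, 2), keepdims=True) + 1e-5
    h_norm = (h - mu) / sigma
    # Attention mask M in [0, 1]: where M is high, identity dominates;
    # elsewhere the attribute features pass through.
    M = sigmoid(w_mask[:, None, None] * h_norm)
    return M * identity_feat + (1.0 - M) * attr_feat

rng = np.random.default_rng(1)
C, H, W = 4, 8, 8
out = aad_blend(rng.standard_normal((C, H, W)),
                rng.standard_normal((C, H, W)),
                rng.standard_normal((C, H, W)),
                rng.standard_normal(C))
```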
Recent advances in Generative Adversarial Networks (GANs) have shown impressive results for the task of facial expression synthesis.
In this work, we propose a novel framework, called InterFaceGAN, for semantic face editing by interpreting the latent semantics learned by GANs.
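Editing of this kind typically moves a latent code along the unit normal of a hyperplane that separates an attribute in latent space, and decorrelates attributes by projecting one direction off another. A minimal sketch under those assumptions; the attribute directions below are random stand-ins, and the names are hypothetical:

```python
import numpy as np

def edit_latent(z, n, alpha):
    """Shift a latent code along an attribute direction n (unit vector)."""
    return z + alpha * n

def condition_direction(n_primary, n_secondary):
    """Remove the component of n_primary along n_secondary so that editing
    the primary attribute leaves the secondary one (ideally) unchanged."""
    n_proj = n_primary - np.dot(n_primary, n_secondary) * n_secondary
    return n_proj / np.linalg.norm(n_proj)

rng = np.random.default_rng(2)
z = rng.standard_normal(512)
n_smile = rng.standard_normal(512); n_smile /= np.linalg.norm(n_smile)
n_age = rng.standard_normal(512); n_age /= np.linalg.norm(n_age)

# Edit the "smile" attribute while holding "age" fixed:
n_smile_only = condition_direction(n_smile, n_age)
z_edited = edit_latent(z, n_smile_only, alpha=3.0)
```

Feeding `z_edited` to the generator would then render the same face with the primary attribute shifted.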
Speech is a rich biometric signal that contains information about the identity, gender, and emotional state of the speaker.
Specifically, a new generator architecture is proposed to simultaneously transfer color/texture styles and transform local facial shapes into anime-like counterparts based on the style of a reference anime-face, while preserving the global structure of the source photo-face.