3D Face Animation
21 papers with code • 3 benchmarks • 6 datasets
Image credit: Cudeiro et al.
Latest papers
LeGO: Leveraging a Surface Deformation Network for Animatable Stylized Face Generation with One Example
We propose a method that can produce a highly stylized 3D face model with the desired topology.
EMAGE: Towards Unified Holistic Co-Speech Gesture Generation via Expressive Masked Audio Gesture Modeling
We propose EMAGE, a framework to generate full-body human gestures from audio and masked gestures, encompassing facial, local body, hands, and global movements.
FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models
We propose a new latent diffusion model for this task, operating in the expression space of neural parametric head models, to synthesize audio-driven realistic head sequences.
FaceDiffuser: Speech-Driven 3D Facial Animation Synthesis Using Diffusion
In addition, the majority of approaches focus on 3D vertex-based datasets, and methods compatible with existing facial animation pipelines using rigged characters are scarce.
Speech-Driven 3D Face Animation with Composite and Regional Facial Movements
This paper emphasizes the importance of considering both the composite and regional natures of facial movements in speech-driven 3D face animation.
SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking Faces
To enhance the visual accuracy of generated lip movements while reducing the dependence on labeled data, we propose SelfTalk, a novel framework that incorporates self-supervision into a cross-modal network system to learn 3D talking faces.
Learning Landmarks Motion from Speech for Speaker-Agnostic 3D Talking Heads Generation
This paper presents a novel approach for generating 3D talking heads from raw audio inputs.
EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation
Specifically, we introduce an emotion disentangling encoder (EDE) that separates emotion from content in speech through cross-reconstruction of speech signals with different emotion labels.
MMFace4D: A Large-Scale Multi-Modal 4D Face Dataset for Audio-Driven 3D Face Animation
Building on MMFace4D, we construct a non-autoregressive framework for audio-driven 3D face animation.
FaceXHuBERT: Text-less Speech-driven E(X)pressive 3D Facial Animation Synthesis Using Self-Supervised Speech Representation Learning
This paper presents FaceXHuBERT, a text-less, speech-driven 3D facial animation generation method that captures personalized and subtle cues in speech (e.g., identity, emotion, and hesitation).