3D Face Reconstruction
76 papers with code • 7 benchmarks • 11 datasets
3D Face Reconstruction is a computer vision task that involves creating a 3D model of a human face from a 2D image or a set of images. The goal of 3D face reconstruction is to reconstruct a digital 3D representation of a person's face, which can be used for various applications such as animation, virtual reality, and biometric identification.
(Image credit: 3DDFA_V2)
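Many of the methods listed below regress the coefficients of a 3D morphable model (3DMM) from a photograph: the face mesh is expressed as a mean shape plus linear combinations of identity and expression bases. A minimal numpy sketch of that linear model, with toy placeholder bases instead of a real model such as the Basel Face Model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy morphable-model bases. A real 3DMM has tens of thousands of
# vertices and PCA bases learned from face scans; these random
# placeholders only illustrate the shapes involved.
n_vertices = 5          # tiny mesh for illustration
n_id, n_expr = 4, 3     # identity / expression components

mean_shape = rng.normal(size=(n_vertices * 3,))
id_basis   = rng.normal(size=(n_vertices * 3, n_id))
expr_basis = rng.normal(size=(n_vertices * 3, n_expr))

def reconstruct(alpha, beta):
    """Linear 3DMM: mean + identity offsets + expression offsets."""
    shape = mean_shape + id_basis @ alpha + expr_basis @ beta
    return shape.reshape(n_vertices, 3)   # one (x, y, z) per vertex

# Coefficients like these are what a reconstruction network
# typically regresses from the input image.
alpha = rng.normal(size=n_id)
beta  = rng.normal(size=n_expr)
mesh = reconstruct(alpha, beta)
print(mesh.shape)  # (5, 3)
```

With zero coefficients the model returns the mean face, which is why fitted coefficients can double as a compact identity/expression descriptor.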
Libraries
Use these libraries to find 3D Face Reconstruction models and implementations.
Latest papers with no code
3D Facial Expressions through Analysis-by-Neural-Synthesis
SMIRK replaces differentiable rendering with a neural rendering module that generates a face image from the rendered predicted mesh geometry and sparsely sampled pixels of the input image.
Monocular Identity-Conditioned Facial Reflectance Reconstruction
We first learn a high-quality prior for facial reflectance.
Skull-to-Face: Anatomy-Guided 3D Facial Reconstruction and Editing
Existing methods for automated facial reconstruction yield inaccurate results because the problem is under-determined: a skull with only a sparse set of tissue-depth measurements cannot fully determine the skinned face.
VRMM: A Volumetric Relightable Morphable Head Model
In this paper, we introduce the Volumetric Relightable Morphable Model (VRMM), a novel volumetric and parametric facial prior for 3D face modeling.
Exploring 3D-aware Lifespan Face Aging via Disentangled Shape-Texture Representations
Existing face aging methods often either model texture aging alone or rely on an entangled shape-texture representation.
MoSAR: Monocular Semi-Supervised Model for Avatar Reconstruction using Differentiable Shading
We also introduce a new dataset, named FFHQ-UV-Intrinsics, the first public dataset providing intrinsic face attributes at scale (diffuse, specular, ambient occlusion and translucency maps) for a total of 10k subjects.
Robust Geometry and Reflectance Disentanglement for 3D Face Reconstruction from Sparse-view Images
This paper presents a novel two-stage approach for reconstructing human faces from sparse-view images, a task made challenging by the unique geometry and complex skin reflectance of each individual.
FitDiff: Robust monocular 3D facial shape and reflectance estimation using Diffusion Models
This model accurately generates relightable facial avatars, utilizing an identity embedding extracted from an "in-the-wild" 2D facial image.
A Perceptual Shape Loss for Monocular 3D Face Reconstruction
In this work we propose a new loss function for monocular face capture, inspired by how humans would perceive the quality of a 3D face reconstruction given a particular image.
High-Quality 3D Face Reconstruction with Affine Convolutional Networks
In our method, the affine convolution layer learns an affine transformation matrix for each spatial location of the feature maps.
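The core idea of a per-location affine transform can be sketched as follows. This is an illustrative numpy version, not the paper's implementation: in the actual method both the features and the per-location affine parameters would be produced by learned network layers, whereas here they are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Feature map of shape (H, W, C), as produced by a CNN backbone.
H, W, C = 4, 4, 8
features = rng.normal(size=(H, W, C))

# One affine transform (matrix + bias) per spatial location,
# unlike an ordinary convolution, which shares one kernel everywhere.
A = rng.normal(size=(H, W, C, C))
b = rng.normal(size=(H, W, C))

# Apply y[h, w] = A[h, w] @ x[h, w] + b[h, w] at every location.
out = np.einsum("hwij,hwj->hwi", A, features) + b
print(out.shape)  # (4, 4, 8)
```

Making the transform spatially varying lets different face regions (eyes, mouth, cheeks) be processed with different linear maps, which is the motivation the title suggests for high-quality reconstruction.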