Outdoor scene relighting is a challenging problem that requires a good understanding of the scene geometry, illumination and albedo.
In this paper we show how to perform scene-level inverse rendering to recover shape, reflectance and lighting from a single, uncontrolled image using a fully convolutional neural network.
Almost universally in computer vision, when surface derivatives are required, they are computed using only first-order accurate finite difference approximations.
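To make the accuracy gap concrete, here is a minimal sketch (not from the paper; the test function and step size are illustrative) comparing a first-order forward difference against a second-order central difference on a smooth 1D "surface" z(x) = sin(x):

```python
import numpy as np

# Illustrative example: z(x) = sin(x), whose exact derivative is cos(x).
h = 0.1          # grid spacing
x = 1.0          # evaluation point
exact = np.cos(x)

# Forward difference: first-order accurate, error O(h).
forward = (np.sin(x + h) - np.sin(x)) / h

# Central difference: second-order accurate, error O(h^2).
central = (np.sin(x + h) - np.sin(x - h)) / (2 * h)

err_fwd = abs(forward - exact)
err_cen = abs(central - exact)
```

Halving h roughly halves the forward-difference error but quarters the central-difference error, which is the distinction the paper builds on.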
In this paper, we bring together two divergent strands of research: photometric face capture and statistical 3D face appearance modelling.
Eye and eye region models are incorporated into the head model, along with basic models of the teeth, tongue and inner mouth cavity.
3 Sep 2019 • Bernhard Egger, William A. P. Smith, Ayush Tewari, Stefanie Wuhrer, Michael Zollhoefer, Thabo Beeler, Florian Bernard, Timo Bolkart, Adam Kortylewski, Sami Romdhani, Christian Theobalt, Volker Blanz, Thomas Vetter
In this paper, we provide a detailed survey of 3D Morphable Face Models over the 20 years since they were first proposed.
Three-dimensional Morphable Models (3DMMs) are powerful statistical tools for representing the 3D surfaces of an object class.
We propose a novel biophysical and dichromatic reflectance model that efficiently characterises spectral skin reflectance.
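For intuition, a dichromatic model splits reflected light into a body (diffuse) term, coloured by the spectral albedo, and an interface (specular) term that carries the illuminant colour. The sketch below is a generic dichromatic mixture with placeholder spectra, not the paper's biophysical skin model:

```python
import numpy as np

# Generic dichromatic model (illustrative placeholder spectra, not the
# paper's skin parameterisation):
#   reflected(l) = m_body * albedo(l) * illuminant(l) + m_iface * illuminant(l)
wavelengths = np.linspace(400, 700, 31)          # nm, visible range
illuminant = np.ones_like(wavelengths)           # flat "white" light
albedo = 0.3 + 0.4 * (wavelengths - 400) / 300   # skin-like reddish ramp

m_body, m_iface = 0.8, 0.2                       # geometry-dependent weights
reflected = m_body * albedo * illuminant + m_iface * illuminant
```

The body term varies with wavelength through the albedo, while the interface term does not; that separation is what lets such models factor shading from skin colouration.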
In this configuration, our model learns an active appearance model and a means to fit the model from scratch with no supervision at all, not even identity labels.
We present a fully automatic pipeline to train 3D Morphable Models (3DMMs), with contributions in pose normalisation, dense correspondence using both shape and texture information, and high quality, high resolution texture mapping.
From a numerical point of view, we use a least-squares formulation of the discrete version of the problem.
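A least-squares formulation of a discretised problem typically stacks more linear constraints than unknowns and minimises the residual norm. A minimal sketch (the system below is random and purely illustrative, not the paper's discretisation):

```python
import numpy as np

# Overdetermined linear system A x = b: 100 constraints, 10 unknowns.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 10))
x_true = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(100)   # noisy observations

# Solve min_x ||A x - b||^2 with a standard least-squares routine.
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The same pattern scales to sparse systems (e.g. per-pixel gradient constraints), where sparse solvers replace the dense `lstsq` call.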
In this paper, we show how a 3D Morphable Model (i.e. a statistical model of the 3D shape of a class of objects such as faces) can be used to spatially transform input data as a module (a 3DMM-STN) within a convolutional neural network.
We present a practical approach to capturing ear-to-ear face models comprising both 3D meshes and intrinsic textures (i.e. diffuse and specular albedo).
In this paper we present a method for computing dense correspondence between a set of 3D face meshes using functional maps.
We propose a fully automatic method for fitting a 3D morphable model to single face images in arbitrary pose and lighting.
We present an approach to modeling ear-to-ear, high-quality texture from one or more partial views of a face with possibly poor resolution and noise.