Search Results for author: William A. P. Smith

Found 22 papers, 11 papers with code

Self-supervised Outdoor Scene Relighting

no code implementations ECCV 2020 Ye Yu, Abhimitra Meka, Mohamed Elgharib, Hans-Peter Seidel, Christian Theobalt, William A. P. Smith

Outdoor scene relighting is a challenging problem that requires good understanding of the scene geometry, illumination and albedo.
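
A rough way to see the role of these three factors is the Lambertian intrinsic-image view: an image factors into albedo and shading, and relighting then amounts to keeping the albedo while swapping the shading. Below is a minimal numpy sketch of that idea; the albedo, normal and sun-direction arrays are hypothetical placeholders, not the paper's learned model.

```python
import numpy as np

def lambertian_shading(normals, light_dir):
    """Per-pixel Lambertian shading max(0, n . l) for unit normals and a unit light direction."""
    return np.clip(normals @ light_dir, 0.0, None)

# Hypothetical scene buffers: per-pixel albedo (H x W x 3) and unit normals (H x W x 3).
H, W = 4, 4
rng = np.random.default_rng(0)
albedo = rng.uniform(0.2, 0.8, size=(H, W, 3))
normals = rng.normal(size=(H, W, 3))
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)

sun_old = np.array([0.0, 0.5, 0.9])            # original sun direction
sun_new = np.array([0.5, 0.5, 0.7])            # target sun direction
sun_old /= np.linalg.norm(sun_old)
sun_new /= np.linalg.norm(sun_new)

image = albedo * lambertian_shading(normals, sun_old)[..., None]   # "observed" image
relit = albedo * lambertian_shading(normals, sun_new)[..., None]   # same albedo, new lighting
print(image.shape, relit.shape)
```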

Outdoor inverse rendering from a single image using multiview self-supervision

1 code implementation 12 Feb 2021 Ye Yu, William A. P. Smith

In this paper we show how to perform scene-level inverse rendering to recover shape, reflectance and lighting from a single, uncontrolled image using a fully convolutional neural network.

Intrinsic Image Decomposition
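
For context, self-supervised inverse rendering pipelines of this kind typically compare the input photograph against a re-rendering computed from the predicted albedo, normals and a low-order spherical-harmonics lighting model. The sketch below implements a generic order-2 SH Lambertian forward model and photometric loss in numpy; the predicted quantities are placeholder arrays, and the paper's exact losses are not reproduced here.

```python
import numpy as np

def sh_basis(normals):
    """Order-2 real spherical-harmonics basis (9 terms) evaluated at unit normals (... x 3)."""
    x, y, z = normals[..., 0], normals[..., 1], normals[..., 2]
    return np.stack([
        0.282095 * np.ones_like(x),
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z ** 2 - 1.0),
        1.092548 * x * z, 0.546274 * (x ** 2 - y ** 2),
    ], axis=-1)

def render(albedo, normals, sh_coeffs):
    """Lambertian re-rendering: albedo * (SH basis . per-channel lighting coefficients)."""
    shading = sh_basis(normals) @ sh_coeffs          # (... x 3): one 9-vector of SH lighting per channel
    return albedo * np.clip(shading, 0.0, None)

# Placeholder "network predictions" for a tiny 8x8 image.
rng = np.random.default_rng(1)
albedo = rng.uniform(0.1, 0.9, size=(8, 8, 3))
normals = rng.normal(size=(8, 8, 3))
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
sh_coeffs = rng.normal(size=(9, 3))                  # 9 SH coefficients per RGB channel

target = rng.uniform(size=(8, 8, 3))                 # stand-in for the input photograph
loss = np.mean((render(albedo, normals, sh_coeffs) - target) ** 2)  # photometric self-supervision
print(loss)
```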

Least squares surface reconstruction on arbitrary domains

1 code implementation ECCV 2020 Dizhong Zhu, William A. P. Smith

Almost universally in computer vision, when surface derivatives are required, they are computed using only first order accurate finite difference approximations.

Surface Reconstruction
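
As a point of reference, the standard least-squares approach to surface-from-gradient integration builds finite-difference operators Dx, Dy on the pixel grid and solves min_z ||Dx z - p||^2 + ||Dy z - q||^2, with the height recovered up to an additive constant. The sketch below does exactly that with scipy on a small rectangular domain, using plain forward differences, i.e. the first-order-accurate approximation the paper argues can be improved upon.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def diff_op(from_idx, to_idx, n):
    """Sparse operator whose k-th row computes z[to_idx[k]] - z[from_idx[k]]."""
    m = from_idx.size
    rows = np.r_[np.arange(m), np.arange(m)]
    cols = np.r_[from_idx, to_idx]
    vals = np.r_[-np.ones(m), np.ones(m)]
    return sp.coo_matrix((vals, (rows, cols)), shape=(m, n)).tocsr()

h, w = 32, 32
idx = np.arange(h * w).reshape(h, w)
Dx = diff_op(idx[:, :-1].ravel(), idx[:, 1:].ravel(), h * w)   # forward difference in x
Dy = diff_op(idx[:-1, :].ravel(), idx[1:, :].ravel(), h * w)   # forward difference in y

# Synthetic test surface and its analytic gradient field (unit pixel spacing).
ys, xs = np.mgrid[0:h, 0:w].astype(float)
z_true = 0.05 * xs ** 2 + 0.02 * ys ** 2
p, q = 0.1 * xs, 0.04 * ys                      # dz/dx, dz/dy

A = sp.vstack([Dx, Dy]).tocsr()
b = np.concatenate([p[:, :-1].ravel(), q[:-1, :].ravel()])
z = lsqr(A, b)[0].reshape(h, w)
z += z_true[0, 0] - z[0, 0]                     # pin the unknown additive constant
print(np.abs(z - z_true).max())                 # non-zero: forward differences are only first-order accurate
```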

A Morphable Face Albedo Model

1 code implementation CVPR 2020 William A. P. Smith, Alassane Seck, Hannah Dee, Bernard Tiddeman, Joshua Tenenbaum, Bernhard Egger

In this paper, we bring together two divergent strands of research: photometric face capture and statistical 3D face appearance modelling.

Art Analysis
Face Model
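
Statistical appearance models of this kind are usually linear (PCA) models: a per-vertex albedo map is the model mean plus a weighted sum of principal components. The sketch below shows the generic construction and sampling step with made-up dimensions and random placeholder data; it is not the released model's API.

```python
import numpy as np

# Hypothetical model dimensions: n_vertices vertices with RGB albedo, n_components principal components.
n_vertices, n_components = 5000, 50
rng = np.random.default_rng(2)

mean_albedo = rng.uniform(0.3, 0.7, size=3 * n_vertices)       # stacked (r, g, b) per vertex
components = rng.normal(size=(3 * n_vertices, n_components))   # PCA basis (one component per column)
stdevs = np.linspace(1.0, 0.1, n_components)                   # per-component standard deviations

def sample_albedo(z):
    """Generate a per-vertex albedo map from standard-normal parameters z."""
    flat = mean_albedo + components @ (stdevs * z)
    return np.clip(flat.reshape(n_vertices, 3), 0.0, 1.0)

albedo = sample_albedo(rng.normal(size=n_components))
print(albedo.shape)   # (5000, 3)
```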

Towards a complete 3D morphable model of the human head

1 code implementation 18 Nov 2019 Stylianos Ploumpis, Evangelos Ververas, Eimear O' Sullivan, Stylianos Moschoglou, Haoyang Wang, Nick Pears, William A. P. Smith, Baris Gecer, Stefanos Zafeiriou

Eye and eye region models are incorporated into the head model, along with basic models of the teeth, tongue and inner mouth cavity.

Face Model

Depth from a polarisation + RGB stereo pair

no code implementations CVPR 2019 Dizhong Zhu, William A. P. Smith

In this paper, we propose a hybrid depth imaging system in which a polarisation camera is augmented by a second image from a standard digital camera.
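
The raw measurement in such a system is a set of intensities captured through different polariser angles, which follow the transmitted radiance sinusoid I(theta) = I_un (1 + rho cos(2 theta - 2 phi)); unpolarised intensity I_un, degree of polarisation rho and phase phi can be recovered per pixel by a linear least-squares fit. The snippet below is a generic sketch of that fit, not the paper's full depth pipeline.

```python
import numpy as np

def fit_polarisation(intensities, angles):
    """Fit I(theta) = a + b cos(2 theta) + c sin(2 theta) per pixel by linear least squares.

    intensities: (K, H, W) images captured at K polariser angles (radians).
    Returns unpolarised intensity, degree of polarisation and phase, each (H, W).
    """
    K = angles.size
    B = np.stack([np.ones(K), np.cos(2 * angles), np.sin(2 * angles)], axis=1)   # (K, 3)
    coeffs, *_ = np.linalg.lstsq(B, intensities.reshape(K, -1), rcond=None)      # (3, H*W)
    a, b, c = coeffs
    rho = np.sqrt(b ** 2 + c ** 2) / np.maximum(a, 1e-8)    # degree of polarisation
    phi = 0.5 * np.arctan2(c, b)                            # phase (polarisation angle)
    shape = intensities.shape[1:]
    return a.reshape(shape), rho.reshape(shape), phi.reshape(shape)

# Synthetic single-pixel check with known parameters.
angles = np.deg2rad(np.array([0.0, 45.0, 90.0, 135.0]))
i_un_true, rho_true, phi_true = 0.6, 0.3, 0.4
obs = i_un_true * (1 + rho_true * np.cos(2 * angles - 2 * phi_true))
i_un, rho, phi = fit_polarisation(obs.reshape(-1, 1, 1), angles)
print(i_un.item(), rho.item(), phi.item())   # ~0.6, ~0.3, ~0.4
```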

Combining 3D Morphable Models: A Large scale Face-and-Head Model

1 code implementation CVPR 2019 Stylianos Ploumpis, Haoyang Wang, Nick Pears, William A. P. Smith, Stefanos Zafeiriou

Three-dimensional Morphable Models (3DMMs) are powerful statistical tools for representing the 3D surfaces of an object class.

Decomposing multispectral face images into diffuse and specular shading and biophysical parameters

no code implementations 18 Feb 2019 Sarah Alotaibi, William A. P. Smith

We propose a novel biophysical and dichromatic reflectance model that efficiently characterises spectral skin reflectance.
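
For orientation, the dichromatic reflection model writes observed spectral radiance as a body (diffuse) term carrying the material's spectral reflectance plus an interface (specular) term with the colour of the illuminant; camera responses then come from integrating against the sensor sensitivities. The sketch below implements that generic combination with placeholder spectra; the biophysical parameterisation of skin reflectance itself (melanin, haemoglobin, etc.) is not reproduced here.

```python
import numpy as np

wavelengths = np.linspace(400, 700, 31)                    # nm, 10 nm steps

def dichromatic_radiance(body_reflectance, illuminant, w_diffuse, w_specular, shading):
    """Dichromatic model: radiance(lambda) = shading * w_d * S(lambda) * E(lambda) + w_s * E(lambda)."""
    return shading * w_diffuse * body_reflectance * illuminant + w_specular * illuminant

def camera_response(radiance, sensitivities):
    """Integrate spectral radiance against per-channel sensor sensitivity curves."""
    return sensitivities @ radiance                         # (n_channels,)

# Placeholder spectra (NOT measured data): a smooth skin-like reflectance, a flat illuminant,
# and three Gaussian-shaped sensor channels.
body_reflectance = 0.2 + 0.4 / (1 + np.exp(-(wavelengths - 580) / 30.0))
illuminant = np.ones_like(wavelengths)
centres = np.array([450.0, 550.0, 650.0])
sensitivities = np.exp(-0.5 * ((wavelengths - centres[:, None]) / 40.0) ** 2)

radiance = dichromatic_radiance(body_reflectance, illuminant,
                                w_diffuse=1.0, w_specular=0.15, shading=0.8)
print(camera_response(radiance, sensitivities))             # hypothetical (R, G, B)-like triple
```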

InverseRenderNet: Learning single image inverse rendering

1 code implementation CVPR 2019 Ye Yu, William A. P. Smith

By incorporating a differentiable renderer, our network can learn from self-supervision.

Statistical transformer networks: learning shape and appearance models via self supervision

no code implementations 7 Apr 2018 Anil Bas, William A. P. Smith

In this configuration, our model learns an active appearance model and a means to fit the model from scratch with no supervision at all, not even identity labels.

A 3D Morphable Model of Craniofacial Shape and Texture Variation

no code implementations ICCV 2017 Hang Dai, Nick Pears, William A. P. Smith, Christian Duncan

We present a fully automatic pipeline to train 3D Morphable Models (3DMMs), with contributions in pose normalisation, dense correspondence using both shape and texture information, and high quality, high resolution texture mapping.

Optical Flow Estimation

3D Morphable Models as Spatial Transformer Networks

1 code implementation 23 Aug 2017 Anil Bas, Patrik Huber, William A. P. Smith, Muhammad Awais, Josef Kittler

In this paper, we show how a 3D Morphable Model (i.e. a statistical model of the 3D shape of a class of objects such as faces) can be used to spatially transform input data as a module (a 3DMM-STN) within a convolutional neural network.
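
At its core, such a layer works like a spatial transformer whose sampling grid comes from the model: 3D vertices generated from shape parameters are projected into the image with predicted pose parameters, and the input image is bilinearly sampled at the projected locations to produce a resampled (e.g. per-vertex) texture. Below is a minimal numpy sketch of that projection-and-sampling step under a scaled orthographic camera; the array shapes and names are illustrative, not the released layer's interface.

```python
import numpy as np

def project_scaled_orthographic(vertices, rotation, scale, translation):
    """Project N x 3 vertices to 2D: scale * (R @ v)[:2] + t."""
    return scale * (vertices @ rotation.T)[:, :2] + translation

def bilinear_sample(image, points):
    """Bilinearly sample an H x W x C image at N x 2 (x, y) pixel locations."""
    h, w = image.shape[:2]
    x = np.clip(points[:, 0], 0, w - 1.001)
    y = np.clip(points[:, 1], 0, h - 1.001)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    fx, fy = (x - x0)[:, None], (y - y0)[:, None]
    top = (1 - fx) * image[y0, x0] + fx * image[y0, x0 + 1]
    bottom = (1 - fx) * image[y0 + 1, x0] + fx * image[y0 + 1, x0 + 1]
    return (1 - fy) * top + fy * bottom

# Hypothetical inputs: a small point cloud standing in for the model mesh, and an input image.
rng = np.random.default_rng(3)
vertices = rng.normal(size=(100, 3))
image = rng.uniform(size=(64, 64, 3))
rotation = np.eye(3)                                # pose "predicted" by the localiser network
scale, translation = 10.0, np.array([32.0, 32.0])

uv = project_scaled_orthographic(vertices, rotation, scale, translation)
sampled = bilinear_sample(image, uv)                # one colour per model vertex
print(sampled.shape)                                # (100, 3)
```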

What does 2D geometric information really tell us about 3D face shape?

1 code implementation 22 Aug 2017 Anil Bas, William A. P. Smith

We show that 2D geometric information does not uniquely determine 3D face shape and is instead an ambiguous cue.

3D Reconstruction
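
One concrete way to see an ambiguity of this kind (a generic perspective-camera example, not the paper's analysis): scaling a 3D shape by a factor k and pushing it k times further from the camera leaves its 2D projection unchanged, so 2D landmark positions alone cannot distinguish the two shapes.

```python
import numpy as np

def perspective_project(points, focal):
    """Pinhole projection of N x 3 camera-space points (camera at the origin, looking down +z)."""
    return focal * points[:, :2] / points[:, 2:3]

rng = np.random.default_rng(4)
shape = rng.normal(size=(68, 3)) * np.array([1.0, 1.2, 0.5])   # hypothetical 3D landmark set
shape[:, 2] += 10.0                                            # place it in front of the camera

k = 2.0
bigger_further = shape * k        # a different 3D shape: twice as large and twice as far away

p1 = perspective_project(shape, focal=500.0)
p2 = perspective_project(bigger_further, focal=500.0)
print(np.abs(p1 - p2).max())      # 0.0 -- identical 2D landmarks from two different 3D shapes
```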

BRISKS: Binary Features for Spherical Images on a Geodesic Grid

no code implementations CVPR 2017 Hao Guan, William A. P. Smith

For interest point detection, we use a variant of the Accelerated Segment Test (AST) corner detector which operates on our geodesic grid.

Interest Point Detection
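
A common way to build such a geodesic grid is to start from an icosahedron inscribed in the sphere and repeatedly subdivide each triangular face, reprojecting the new vertices onto the sphere. The sketch below constructs one this way (using scipy's convex hull to obtain the icosahedron faces); it illustrates the grid only, not the paper's detector or descriptor.

```python
import numpy as np
from scipy.spatial import ConvexHull

def icosahedron():
    """Vertices and triangular faces of a unit icosahedron."""
    t = (1 + np.sqrt(5)) / 2
    v = np.array([[-1, t, 0], [1, t, 0], [-1, -t, 0], [1, -t, 0],
                  [0, -1, t], [0, 1, t], [0, -1, -t], [0, 1, -t],
                  [t, 0, -1], [t, 0, 1], [-t, 0, -1], [-t, 0, 1]], dtype=float)
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    faces = ConvexHull(v).simplices          # 20 triangles; winding does not matter here
    return v, faces

def subdivide(vertices, faces):
    """Split every triangle into four, pushing new edge midpoints back onto the unit sphere."""
    verts = list(map(tuple, vertices))
    cache, new_faces = {}, []

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in cache:
            m = (np.array(verts[i]) + np.array(verts[j])) / 2
            m /= np.linalg.norm(m)
            cache[key] = len(verts)
            verts.append(tuple(m))
        return cache[key]

    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return np.array(verts), np.array(new_faces)

v, f = icosahedron()
for _ in range(3):                            # three subdivision levels
    v, f = subdivide(v, f)
print(len(v), len(f))                         # 642 vertices, 1280 faces
```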

Ear-to-ear Capture of Facial Intrinsics

no code implementations 8 Sep 2016 Alassane Seck, William A. P. Smith, Arnaud Dessein, Bernard Tiddeman, Hannah Dee, Abhishek Dutta

We present a practical approach to capturing ear-to-ear face models comprising both 3D meshes and intrinsic textures (i.e. diffuse and specular albedo).

Face Model

Functional Faces: Groupwise Dense Correspondence Using Functional Maps

no code implementations CVPR 2016 Chao Zhang, William A. P. Smith, Arnaud Dessein, Nick Pears, Hang Dai

In this paper we present a method for computing dense correspondence between a set of 3D face meshes using functional maps.
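
In the functional-map framework, a correspondence between two meshes is encoded not as a point-to-point map but as a small matrix C that translates coefficients in one mesh's Laplace-Beltrami eigenbasis into the other's; C is estimated by least squares from corresponding descriptor functions, and a point map can then be read off by nearest-neighbour search in the spectral embedding. The sketch below assumes the eigenbases and descriptors are already computed (random placeholders here) and shows only those two generic steps, not the paper's groupwise formulation.

```python
import numpy as np
from scipy.spatial import cKDTree

# Placeholder spectral data: k-dimensional Laplace-Beltrami eigenbases for two meshes
# (n1 and n2 vertices) and d corresponding descriptor functions expressed in those bases.
rng = np.random.default_rng(5)
k, d, n1, n2 = 30, 60, 1000, 1200
phi1 = rng.normal(size=(n1, k))        # eigenbasis of mesh 1 (assumed given)
phi2 = rng.normal(size=(n2, k))        # eigenbasis of mesh 2 (assumed given)
A = rng.normal(size=(k, d))            # descriptor coefficients on mesh 1
C_true = rng.normal(size=(k, k))       # unknown ground-truth functional map (toy example only)
B = C_true @ A                         # corresponding descriptor coefficients on mesh 2

# Estimate the functional map: C = argmin || C A - B ||_F via least squares.
C = np.linalg.lstsq(A.T, B.T, rcond=None)[0].T
print(np.abs(C - C_true).max())        # ~0 in this noise-free toy setting

# Convert to a point-to-point map: each vertex of mesh 2 matches the nearest
# transferred spectral embedding of a mesh-1 vertex.
matches = cKDTree(phi1 @ C.T).query(phi2)[1]     # index into mesh 1 for every vertex of mesh 2
print(matches.shape)                             # (1200,)
```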

Fitting a 3D Morphable Model to Edges: A Comparison Between Hard and Soft Correspondences

1 code implementation 2 Feb 2016 Anil Bas, William A. P. Smith, Timo Bolkart, Stefanie Wuhrer

We propose a fully automatic method for fitting a 3D morphable model to single face images in arbitrary pose and lighting.
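
With hard correspondences (each image feature assigned to a fixed model vertex), one building block of such a fitting pipeline is a linear least-squares solve for the shape parameters given the current pose: under a scaled orthographic camera the 2D residual is linear in the 3DMM coefficients. The sketch below shows that single step with a random toy model and a fixed, known pose; a full method would alternate this with pose estimation, correspondence updates and regularisation.

```python
import numpy as np

rng = np.random.default_rng(6)
n_vertices, n_params = 200, 20

# Toy linear shape model: stacked (x, y, z) per vertex.
mean_shape = rng.normal(size=3 * n_vertices)
basis = rng.normal(size=(3 * n_vertices, n_params)) * 0.1

# Fixed, known pose (scaled orthographic): scale, rotation and 2D translation.
scale, R, t = 5.0, np.eye(3), np.array([10.0, 20.0])
P = scale * R[:2, :]                                   # 2 x 3 projection of model points

def project_all(shape_params):
    """Project every model vertex to 2D for the given shape parameters."""
    verts = (mean_shape + basis @ shape_params).reshape(n_vertices, 3)
    return verts @ P.T + t

# Synthesise "observed" landmarks from ground-truth parameters, then recover them.
alpha_true = rng.normal(size=n_params)
landmarks = project_all(alpha_true)

# The residual is linear in alpha: stack the 2 x n_params Jacobian of every landmark.
J = (P @ basis.reshape(n_vertices, 3, n_params)).reshape(2 * n_vertices, n_params)
mean_proj = mean_shape.reshape(n_vertices, 3) @ P.T + t
b = (landmarks - mean_proj).ravel()
alpha_hat = np.linalg.lstsq(J, b, rcond=None)[0]
print(np.abs(alpha_hat - alpha_true).max())            # ~0: exact in this noise-free toy setting
```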

Example-Based Modeling of Facial Texture From Deficient Data

no code implementations ICCV 2015 Arnaud Dessein, William A. P. Smith, Richard C. Wilson, Edwin R. Hancock

We present an approach to modeling ear-to-ear, high-quality texture from one or more partial views of a face with possibly poor resolution and noise.

Super-Resolution
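
A standard statistical route to this kind of completion problem, sketched generically below (it is not the paper's method, which also handles super-resolution and noise), is to fit a linear texture model to only the observed pixels by least squares and then read the missing pixels off the reconstruction.

```python
import numpy as np

rng = np.random.default_rng(7)
n_pixels, n_components, ridge = 2000, 40, 1e-2

# Toy linear texture model (placeholder for a model learnt from example faces).
mean_tex = rng.uniform(0.3, 0.7, size=n_pixels)
basis = rng.normal(size=(n_pixels, n_components)) * 0.05

# A "deficient" observation: a texture generated by the model with most pixels missing.
coeffs_true = rng.normal(size=n_components)
full_tex = mean_tex + basis @ coeffs_true
observed = rng.uniform(size=n_pixels) > 0.6           # mask of available pixels

# Fit model coefficients to the observed pixels only (ridge-regularised least squares).
A = basis[observed]
b = full_tex[observed] - mean_tex[observed]
coeffs = np.linalg.solve(A.T @ A + ridge * np.eye(n_components), A.T @ b)

completed = mean_tex + basis @ coeffs                 # estimates for the missing pixels
print(np.abs(completed[~observed] - full_tex[~observed]).max())
```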
