Search Results for author: William A. P. Smith

Found 28 papers, 13 papers with code

RENI++ A Rotation-Equivariant, Scale-Invariant, Natural Illumination Prior

1 code implementation15 Nov 2023 James A. D. Gardner, Bernhard Egger, William A. P. Smith

Training our model on a curated dataset of 1.6K HDR environment maps of natural scenes, we compare it against traditional representations, demonstrate its applicability for an inverse rendering task and show environment map completion from partial observations.

Depth Estimation, Inverse Rendering

ID2image: Leakage of non-ID information into face descriptors and inversion from descriptors to images

no code implementations15 Apr 2023 Mingrui Li, William A. P. Smith, Patrik Huber

Information about the environment (such as background and lighting) or changeable aspects of the face (such as pose, expression, presence of glasses, hat etc.) can leak into face descriptors.

Face Recognition

If At First You Don't Succeed: Test Time Re-ranking for Zero-shot, Cross-domain Retrieval

no code implementations30 Mar 2023 Finlay G. C. Hudson, William A. P. Smith

In this paper we propose a novel method for zero-shot, cross-domain image retrieval in which we make two key contributions.

Knowledge Distillation, Re-Ranking +2

Neural apparent BRDF fields for multiview photometric stereo

no code implementations14 Jul 2022 Meghna Asthana, William A. P. Smith, Patrik Huber

We propose to tackle the multiview photometric stereo problem using an extension of Neural Radiance Fields (NeRFs), conditioned on light source direction.
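The idea of conditioning a NeRF-style model on light source direction can be sketched as follows. This is a generic illustration under my own assumptions (layer sizes, a single colour head taking view and light directions), not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class ApparentBRDFField(nn.Module):
    """Toy NeRF-style MLP whose colour output is conditioned on the
    light source direction, so the same geometry can be rendered
    under different lights (the setting of multiview photometric
    stereo). Purely illustrative."""

    def __init__(self, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.density = nn.Linear(hidden, 1)
        # colour depends on position features, view direction and light direction
        self.colour = nn.Sequential(nn.Linear(hidden + 3 + 3, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, xyz, view_dir, light_dir):
        h = self.trunk(xyz)
        sigma = torch.relu(self.density(h))         # volume density
        rgb = self.colour(torch.cat([h, view_dir, light_dir], dim=-1))
        return rgb, sigma
```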

Rotation-Equivariant Conditional Spherical Neural Fields for Learning a Natural Illumination Prior

no code implementations7 Jun 2022 James A. D. Gardner, Bernhard Egger, William A. P. Smith

Training our model on a curated dataset of 1.6K HDR environment maps of natural scenes, we compare it against traditional representations, demonstrate its applicability for an inverse rendering task and show environment map completion from partial observations.

Inverse Rendering

Self-supervised Outdoor Scene Relighting

no code implementations ECCV 2020 Ye Yu, Abhimitra Meka, Mohamed Elgharib, Hans-Peter Seidel, Christian Theobalt, William A. P. Smith

Outdoor scene relighting is a challenging problem that requires good understanding of the scene geometry, illumination and albedo.

Outdoor inverse rendering from a single image using multiview self-supervision

1 code implementation12 Feb 2021 Ye Yu, William A. P. Smith

In this paper we show how to perform scene-level inverse rendering to recover shape, reflectance and lighting from a single, uncontrolled image using a fully convolutional neural network.

Intrinsic Image Decomposition, Inverse Rendering

Least squares surface reconstruction on arbitrary domains

1 code implementation ECCV 2020 Dizhong Zhu, William A. P. Smith

Almost universally in computer vision, when surface derivatives are required, they are computed using only first order accurate finite difference approximations.

Surface Reconstruction
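The accuracy gap mentioned in the abstract is easy to verify numerically. A small check (generic illustration, not code from the paper) comparing a first-order forward difference against a second-order central difference:

```python
import numpy as np

# Approximate f'(x) for f(x) = sin(x) at x = 1.0 and compare the
# truncation error of a first-order forward difference against a
# second-order central difference as the step size shrinks.
f = np.sin
x, exact = 1.0, np.cos(1.0)

for h in (1e-1, 1e-2, 1e-3):
    forward = (f(x + h) - f(x)) / h           # O(h) accurate
    central = (f(x + h) - f(x - h)) / (2 * h) # O(h^2) accurate
    print(h, abs(forward - exact), abs(central - exact))
```

Halving h roughly halves the forward-difference error but quarters the central-difference error, which is why the choice of approximation matters when surface derivatives feed a reconstruction.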

A Morphable Face Albedo Model

1 code implementation CVPR 2020 William A. P. Smith, Alassane Seck, Hannah Dee, Bernard Tiddeman, Joshua Tenenbaum, Bernhard Egger

In this paper, we bring together two divergent strands of research: photometric face capture and statistical 3D face appearance modelling.

Art Analysis, Face Model +1

Towards a complete 3D morphable model of the human head

1 code implementation18 Nov 2019 Stylianos Ploumpis, Evangelos Ververas, Eimear O'Sullivan, Stylianos Moschoglou, Haoyang Wang, Nick Pears, William A. P. Smith, Baris Gecer, Stefanos Zafeiriou

Eye and eye region models are incorporated into the head model, along with basic models of the teeth, tongue and inner mouth cavity.

Face Model

Depth from a polarisation + RGB stereo pair

no code implementations CVPR 2019 Dizhong Zhu, William A. P. Smith

In this paper, we propose a hybrid depth imaging system in which a polarisation camera is augmented by a second image from a standard digital camera.

Combining 3D Morphable Models: A Large scale Face-and-Head Model

1 code implementation CVPR 2019 Stylianos Ploumpis, Haoyang Wang, Nick Pears, William A. P. Smith, Stefanos Zafeiriou

Three-dimensional Morphable Models (3DMMs) are powerful statistical tools for representing the 3D surfaces of an object class.

Decomposing multispectral face images into diffuse and specular shading and biophysical parameters

no code implementations18 Feb 2019 Sarah Alotaibi, William A. P. Smith

We propose a novel biophysical and dichromatic reflectance model that efficiently characterises spectral skin reflectance.
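A dichromatic reflectance model splits observed radiance into a body (diffuse) term, tinted by the surface's spectral reflectance, and an interface (specular) term carrying the illuminant's spectrum. A toy per-wavelength sketch, with all quantities (illuminant, skin reflectance curve, shading factors) invented for illustration:

```python
import numpy as np

wavelengths = np.linspace(400, 700, 31)          # nm, visible range
E = np.ones_like(wavelengths)                    # toy flat illuminant spectrum
S = np.exp(-((wavelengths - 600) / 80.0) ** 2)   # toy skin spectral reflectance
m_d, m_s = 0.8, 0.2                              # geometry-dependent shading factors

# Dichromatic image formation: body term + interface term
I = m_d * S * E + m_s * E
```

Fitting such a model per pixel is what lets the diffuse and specular components (and, with a biophysical reflectance term, the underlying skin parameters) be separated.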

InverseRenderNet: Learning single image inverse rendering

1 code implementation CVPR 2019 Ye Yu, William A. P. Smith

By incorporating a differentiable renderer, our network can learn from self-supervision.

Inverse Rendering
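The self-supervision idea can be sketched as follows: re-render the image from the network's predicted albedo, normals and lighting, then penalise the photometric error against the input. The single-distant-light Lambertian renderer and all names below are illustrative assumptions, not the paper's exact formulation:

```python
import torch

def lambert_render(albedo, normals, light_dir):
    # albedo: (H, W, 3); normals: (H, W, 3) unit vectors;
    # light_dir: (3,) unit vector of one distant light source
    shading = (normals @ light_dir).clamp(min=0.0)   # (H, W) cosine shading
    return albedo * shading.unsqueeze(-1)

def photometric_loss(image, albedo, normals, light_dir):
    # Differentiable, so gradients flow back to the network
    # that predicted albedo, normals and lighting.
    rendered = lambert_render(albedo, normals, light_dir)
    return torch.mean((rendered - image) ** 2)
```

Because every operation is differentiable, minimising this loss trains the inverse-rendering network without ground-truth albedo or geometry.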

Statistical transformer networks: learning shape and appearance models via self supervision

no code implementations7 Apr 2018 Anil Bas, William A. P. Smith

In this configuration, our model learns an active appearance model and a means to fit the model from scratch with no supervision at all, even identity labels.

A 3D Morphable Model of Craniofacial Shape and Texture Variation

no code implementations ICCV 2017 Hang Dai, Nick Pears, William A. P. Smith, Christian Duncan

We present a fully automatic pipeline to train 3D Morphable Models (3DMMs), with contributions in pose normalisation, dense correspondence using both shape and texture information, and high quality, high resolution texture mapping.

Optical Flow Estimation

3D Morphable Models as Spatial Transformer Networks

1 code implementation23 Aug 2017 Anil Bas, Patrik Huber, William A. P. Smith, Muhammad Awais, Josef Kittler

In this paper, we show how a 3D Morphable Model (i.e. a statistical model of the 3D shape of a class of objects such as faces) can be used to spatially transform input data as a module (a 3DMM-STN) within a convolutional neural network.
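At its core, a 3D Morphable Model is a linear statistical model: a new shape is the mean shape plus a weighted combination of principal components. A minimal sketch with made-up dimensions and a random basis standing in for a learned PCA basis:

```python
import numpy as np

n_vertices, n_components = 1000, 40

mean_shape = np.zeros(3 * n_vertices)                  # stacked (x, y, z) mean
basis = np.random.randn(3 * n_vertices, n_components)  # stand-in for a PCA basis
params = np.random.randn(n_components) * 0.1           # low-dimensional shape code

# Linear model: mean + weighted sum of components, reshaped to per-vertex 3D points
shape = (mean_shape + basis @ params).reshape(n_vertices, 3)
```

In a 3DMM-STN, the low-dimensional `params` (plus pose) are predicted by the network, and the generated, projected shape defines the spatial transformation applied to the input.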

What does 2D geometric information really tell us about 3D face shape?

1 code implementation22 Aug 2017 Anil Bas, William A. P. Smith

We show that this is not the case and that geometric information is an ambiguous cue.

3D Reconstruction

BRISKS: Binary Features for Spherical Images on a Geodesic Grid

no code implementations CVPR 2017 Hao Guan, William A. P. Smith

For interest point detection, we use a variant of the Accelerated Segment Test (AST) corner detector which operates on our geodesic grid.

Interest Point Detection

Ear-to-ear Capture of Facial Intrinsics

no code implementations8 Sep 2016 Alassane Seck, William A. P. Smith, Arnaud Dessein, Bernard Tiddeman, Hannah Dee, Abhishek Dutta

We present a practical approach to capturing ear-to-ear face models comprising both 3D meshes and intrinsic textures (i.e. diffuse and specular albedo).

Face Model

Functional Faces: Groupwise Dense Correspondence Using Functional Maps

no code implementations CVPR 2016 Chao Zhang, William A. P. Smith, Arnaud Dessein, Nick Pears, Hang Dai

In this paper we present a method for computing dense correspondence between a set of 3D face meshes using functional maps.

Fitting a 3D Morphable Model to Edges: A Comparison Between Hard and Soft Correspondences

1 code implementation2 Feb 2016 Anil Bas, William A. P. Smith, Timo Bolkart, Stefanie Wuhrer

We propose a fully automatic method for fitting a 3D morphable model to single face images in arbitrary pose and lighting.

Example-Based Modeling of Facial Texture From Deficient Data

no code implementations ICCV 2015 Arnaud Dessein, William A. P. Smith, Richard C. Wilson, Edwin R. Hancock

We present an approach to modeling ear-to-ear, high-quality texture from one or more partial views of a face with possibly poor resolution and noise.

Super-Resolution
