In our solution, we use a few images of a face to perform 3D reconstruction, and we introduce the notion of the GAN camera manifold, the key element allowing us to precisely define the range of images that the GAN can reproduce in a stable manner.
We design a convolutional network around input feature maps that facilitate learning an implicit representation of scene materials and illumination, enabling both relighting and free-viewpoint navigation.
Our solution is extremely simple: we fine-tune a deep appearance-capture network on the provided exemplars, such that it learns to extract similar SVBRDF values from the target image.
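The fine-tuning step can be sketched schematically. The snippet below is a toy stand-in, not the actual network: a linear map plays the role of the pretrained appearance-capture model, and a few synthetic (feature, SVBRDF) pairs play the role of the provided exemplars; all names, shapes, and the learning rate are illustrative assumptions.

```python
import numpy as np

# Schematic stand-in for a pretrained appearance-capture network:
# a linear map from image features to SVBRDF parameters (hypothetical).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))          # "pretrained" weights

# A few exemplar pairs (image features -> target SVBRDF values),
# standing in for the provided exemplars of the target material.
X = rng.normal(size=(3, 8))          # 3 exemplar images, 8 features each
Y = rng.normal(size=(3, 4))          # matching SVBRDF parameters

def loss(W, X, Y):
    """Mean squared error between predicted and exemplar SVBRDF values."""
    return float(np.mean((X @ W.T - Y) ** 2))

# Fine-tune: a few gradient steps on the exemplars only, so the model
# learns to extract SVBRDF values matching the target material.
lr = 0.05
losses = [loss(W, X, Y)]
for _ in range(50):
    grad = 2.0 * (X @ W.T - Y).T @ X / X.shape[0]   # d(loss)/dW
    W -= lr * grad
    losses.append(loss(W, X, Y))
```

The point of the sketch is the structure of the procedure: all pretrained weights are updated, but only on the handful of exemplars, so the model specializes to the target image rather than being retrained from scratch.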
Empowered by deep learning, recent methods for material capture can estimate spatially-varying reflectance from a single photograph.
We present a new deep-learning approach to blending for image-based rendering (IBR), in which we use held-out real image data to learn the blending weights that combine contributions from the input photos.
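A toy version of learned blending can illustrate the idea. In the sketch below, each of a few candidate photos proposes a per-pixel color plus a feature (e.g. how far its viewpoint deviates from the novel view), and a single blending parameter is fit against a held-out "real" image; this stands in for training a network, and every name and shape is an illustrative assumption, not the paper's actual model.

```python
import numpy as np

# Toy setup: K candidate photos each propose a color per pixel, plus a
# per-contribution feature (e.g. angular deviation from the novel view).
rng = np.random.default_rng(1)
K, P = 4, 64                          # photos, pixels
colors = rng.uniform(size=(K, P))     # candidate contributions
feats  = rng.normal(size=(K, P))      # per-contribution features
# Held-out "real" image: here, a blend that favors low feature values.
target = (np.exp(-feats) / np.exp(-feats).sum(0) * colors).sum(0)

def softmax(z):
    """Numerically stable softmax over the photo axis."""
    z = z - z.max(axis=0, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=0, keepdims=True)

def render(a):
    """Blend candidate colors with weights softmax(a * feats)."""
    return (softmax(a * feats) * colors).sum(axis=0)

# Fit the single blending parameter `a` against the held-out image
# (a grid-search stand-in for gradient training of a blending network).
grid = np.linspace(-3.0, 3.0, 121)
errs = [float(np.mean((render(a) - target) ** 2)) for a in grid]
a_best = grid[int(np.argmin(errs))]
```

The design choice this mirrors is supervising the blending weights directly with held-out real images, so the learned combination is judged by how well the final rendered image matches reality rather than by any hand-crafted heuristic.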
Texture, highlights, and shading are some of the many visual cues that allow humans to perceive material appearance from a single picture.