This approach is predicated on neural network differentiability: the requirement that analytic derivatives of a given problem's task metric can be computed with respect to the neural network's parameters.
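As a minimal sketch of what this requirement means in practice (the network, task metric, and shapes below are illustrative placeholders, not taken from the original work), an automatic-differentiation framework such as JAX yields exactly these analytic derivatives:

import jax
import jax.numpy as jnp

def network(params, x):
    # Illustrative single-layer model; any differentiable network works.
    w, b = params
    return jnp.tanh(x @ w + b)

def task_metric(params, x, target):
    # A differentiable task metric, here a simple L2 reconstruction loss.
    return jnp.mean((network(params, x) - target) ** 2)

# Analytic derivatives of the task metric with respect to the parameters.
grad_fn = jax.grad(task_metric)
params = (jnp.ones((4, 2)), jnp.zeros(2))
grads = grad_fn(params, jnp.ones((3, 4)), jnp.zeros((3, 2)))

A non-differentiable step anywhere in this chain would prevent jax.grad from producing useful gradients, which is why every stage of the pipeline must remain differentiable.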
The goal of this work is to perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments (e.g., Street View).
We demonstrate the effectiveness of our approach in several experiments on both synthetic and real images.
Based on these models, we introduce a new method that takes as input a single photo of a clothed player in any basketball pose and outputs a high-resolution mesh and 3D pose for that player.
Finally, we show how our neural rendering framework can capture and faithfully render objects from real images across a diverse set of classes.
We present a system that transforms a monocular video of a soccer game into a moving 3D reconstruction, in which the players and field can be rendered interactively with a 3D viewer or through an augmented reality device.
In this paper we extract surface reflectance and natural environmental illumination from a reflectance map, i.e., from a single 2D image of a sphere of one material under one illumination.
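For concreteness, and as the standard formulation rather than a quotation from the paper, a reflectance map $R$ stores, for a fixed view direction $\omega_o$, the radiance reflected by a surface point with normal $\mathbf{n}$:

\[ R(\mathbf{n}) = \int_{\Omega} f(\mathbf{n}, \omega_i, \omega_o)\, L(\omega_i)\, \max(0, \mathbf{n} \cdot \omega_i)\, d\omega_i, \]

where $f$ is the BRDF and $L$ the environmental illumination. A single image of a sphere samples $R$ over all visible normals, so reflectance and illumination appear only through their combined effect, which is what makes separating them difficult.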
We propose a technique that uses structural information extracted from a 3D model matching the imaged object in viewpoint and shape.
Undoing the image formation process, and thereby decomposing appearance into its intrinsic properties, is a challenging task due to the under-constrained nature of this inverse problem.
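A classic illustration of this under-constraint (standard in the intrinsic-images literature, not specific to this work) is the decomposition of an image into reflectance and shading:

\[ I(x) = R(x)\, S(x), \]

where for any positive function $\alpha(x)$, the pair $R'(x) = \alpha(x) R(x)$ and $S'(x) = S(x)/\alpha(x)$ reproduces the same image $I$; priors on reflectance and shading are therefore required to pick out a unique solution.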
As the amount of visual data increases, so does the need for summarization tools that can be used to explore large image collections and to quickly get familiar with their content.
We propose a technique to use the structural information extracted from a set of 3D models of an object class to improve novel-view synthesis for images showing unknown instances of this class.