We present an efficient method for joint optimization of topology, materials and lighting from multi-view image observations.
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
We introduce Deformable Tetrahedral Meshes (DefTet), a parameterization that uses volumetric tetrahedral meshes for the reconstruction problem.
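As a minimal illustration of working with a tetrahedral parameterization (a sketch, not the DefTet implementation), the signed volume of each tetrahedron can be computed from its four vertices as det([b-a, c-a, d-a]) / 6; the sign indicates orientation:

```python
import numpy as np

def tet_signed_volumes(vertices, tets):
    """Signed volume of each tetrahedron: det([b-a, c-a, d-a]) / 6."""
    a, b, c, d = (vertices[tets[:, i]] for i in range(4))
    # Stack edge vectors as columns and take a batched determinant.
    return np.linalg.det(np.stack([b - a, c - a, d - a], axis=-1)) / 6.0

# A tetrahedron spanning the coordinate axes has volume 1/6.
verts = np.array([[0., 0., 0.],
                  [1., 0., 0.],
                  [0., 1., 0.],
                  [0., 0., 1.]])
tets = np.array([[0, 1, 2, 3]])
print(tet_signed_volumes(verts, tets))  # [0.16666667]
```

Function and variable names here are hypothetical; a deformable-tet method would additionally make the vertex positions learnable and optimize them against image observations.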
Our approach has two key components: we exploit GANs as a multi-view data generator to train an inverse graphics network with an off-the-shelf differentiable renderer, and we then use the trained inverse graphics network as a teacher to disentangle the GAN's latent code into interpretable 3D properties.
We consider the problem of optimizing the performance of an active imaging system by automatically discovering the illuminations it should use and how to decode them.
6 code implementations • 12 Nov 2019 • Krishna Murthy Jatavallabhula, Edward Smith, Jean-Francois Lafleche, Clement Fuji Tsang, Artem Rozantsev, Wenzheng Chen, Tommy Xiang, Rev Lebaredian, Sanja Fidler
We present Kaolin, a PyTorch library aiming to accelerate 3D deep learning research.
Many machine learning models operate on images, but ignore the fact that images are 2D projections formed by 3D geometry interacting with light, in a process called rendering.
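The projection the sentence above refers to can be made concrete with a simple pinhole camera model (a generic sketch under assumed conventions, not the paper's renderer): a 3D point in the camera frame at depth z maps to the image plane at (f·x/z, f·y/z).

```python
import numpy as np

def project_pinhole(points, f=1.0):
    """Perspective-project 3D points (camera frame, z > 0) onto the image plane."""
    points = np.asarray(points, dtype=float)
    # Divide x and y by depth z; f is the (assumed) focal length.
    return f * points[:, :2] / points[:, 2:3]

pts = np.array([[0.0, 0.0, 2.0],
                [1.0, 1.0, 2.0],
                [1.0, 1.0, 4.0]])  # same x, y; the last point is twice as deep
print(project_pinhole(pts))
# Points twice as far away project half as large -- depth is lost in the image.
```

A differentiable renderer extends this idea: because the projection (and shading) is expressed as differentiable operations, gradients of an image-space loss can flow back to the 3D geometry, materials, and lighting.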
Relying on consumer color image sensors with high fill factor, high quantum efficiency, and low read-out noise, we demonstrate high-fidelity color non-line-of-sight (NLOS) imaging for scene configurations previously tackled only with picosecond time resolution.
We show that the resulting P-maps may be used to evaluate how likely a rectangle proposal is to contain an instance of the class, and further process good proposals to produce an accurate object cutout mask.
Human 3D pose estimation from a single image is a challenging task with numerous applications.