However, neural implicit representations suffer from slow inference and require careful initialization.
At test time, our model generates images with explicit control over the camera as well as the shape and appearance of the scene.
While several recent works investigate how to disentangle underlying factors of variation in the data, most of them operate in 2D and hence ignore that our world is three-dimensional.
In contrast to voxel-based representations, radiance fields are not confined to a coarse discretization of 3D space, yet they allow for disentangling camera and scene properties while degrading gracefully in the presence of reconstruction ambiguity.
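The radiance-field idea can be illustrated with a minimal numpy sketch: a toy field maps 3D points to color and density, and a ray is rendered by alpha-compositing samples with the standard transmittance weights. The `radiance_field` and `render_ray` names, and the analytic spherical-shell density, are illustrative stand-ins for a learned model, not any paper's actual API.

```python
import numpy as np

def radiance_field(points, view_dir):
    """Toy stand-in for a learned radiance field: maps 3D points
    (and a viewing direction) to RGB color and volume density.
    Here density peaks on a spherical shell of radius 1."""
    dist = np.linalg.norm(points, axis=-1)
    density = np.exp(-((dist - 1.0) ** 2) / 0.01)   # thin shell at radius 1
    rgb = np.clip(0.5 + 0.5 * points, 0.0, 1.0)     # color derived from position
    return rgb, density

def render_ray(origin, direction, near=0.0, far=3.0, n_samples=128):
    """Volume rendering: alpha-composite color samples along the ray.
    weight_i = alpha_i * prod_{j<i} (1 - alpha_j)."""
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction
    rgb, sigma = radiance_field(pts, direction)
    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))  # sample spacings
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)

color = render_ray(np.array([0.0, 0.0, -2.5]), np.array([0.0, 0.0, 1.0]))
```

Because the samples are continuous 3D points rather than voxel centers, the resolution is limited only by the sampling rate along the ray, which is the contrast with voxel grids drawn above.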
In this work, we propose a novel implicit representation for capturing the visual appearance of an object in terms of its surface light field.
In this work, we propose a differentiable rendering formulation for implicit shape and texture representations.
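One common route to such a differentiable formulation is sphere tracing an implicit surface and querying a texture field at the hit point; because the surface point is defined implicitly by the shape function, gradients can flow to the shape and texture parameters. The sketch below uses an analytic signed distance function for a unit sphere; `sdf`, `texture`, and `sphere_trace` are hypothetical names, and a real system would replace both fields with learned networks.

```python
import numpy as np

def sdf(p):
    """Toy implicit shape: signed distance to a unit sphere.
    In the learned setting this would be a neural network."""
    return np.linalg.norm(p) - 1.0

def texture(p):
    """Toy implicit texture field: color as a function of the 3D surface point."""
    return np.clip(0.5 + 0.5 * p, 0.0, 1.0)

def sphere_trace(origin, direction, n_steps=64, eps=1e-4):
    """March along the ray by the current SDF value until the surface
    is reached; the returned point depends smoothly on the shape."""
    t = 0.0
    for _ in range(n_steps):
        p = origin + t * direction
        d = sdf(p)
        if d < eps:
            return p, texture(p)
        t += d
    return None, None  # ray missed the shape

hit, color = sphere_trace(np.array([0.0, 0.0, -3.0]),
                          np.array([0.0, 0.0, 1.0]))
# hit is approximately (0, 0, -1), the front of the unit sphere
```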
In order to perform dense 4D reconstruction from images or sparse point clouds, we combine our method with a continuous 3D representation.
A major reason for these limitations is that common representations of texture are inefficient or difficult to interface with modern deep learning techniques.
With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity.