The goal of novel view synthesis is to render a target image from an arbitrary target camera pose, given a set of source images and their associated camera poses.
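The task interface can be sketched with a naive baseline: blend the source images, weighting each by the inverse distance between its camera pose and the target pose. This is a hypothetical illustration of the inputs and outputs only (the function name `synthesize_view` and the blending heuristic are assumptions, not any listed paper's method):

```python
import numpy as np

def synthesize_view(src_images, src_poses, tgt_pose):
    """Naive novel-view baseline (illustrative only): blend source
    images weighted by inverse distance between each source camera
    pose and the target pose, all given as 4x4 matrices."""
    dists = np.array([np.linalg.norm(p - tgt_pose) for p in src_poses])
    weights = 1.0 / (dists + 1e-8)   # closer poses contribute more
    weights /= weights.sum()         # normalize to a convex blend
    # Contract the weight vector against the stacked images:
    # (N,) x (N, H, W, C) -> (H, W, C)
    return np.tensordot(weights, np.stack(src_images), axes=1)

# Two toy 2x2 RGB sources: a black image at the identity pose and a
# white image at a translated pose.
pose_a = np.eye(4)
pose_b = np.eye(4); pose_b[0, 3] = 1.0
images = [np.zeros((2, 2, 3)), np.ones((2, 2, 3))]

# Querying at pose_a reproduces (almost exactly) the black image.
out = synthesize_view(images, [pose_a, pose_b], pose_a)
```

A real method replaces this heuristic with learned geometry-aware rendering, but the signature — source images plus poses in, target image out — is the common contract across the papers below.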
Similar to traditional textures, neural textures are stored as maps on top of 3D mesh proxies; however, the high-dimensional feature maps contain significantly more information, which can be interpreted by our new deferred neural rendering pipeline.
We present a practical and robust deep learning solution for capturing and rendering novel views of complex real world scenes for virtual exploration.
We address the task of multi-view novel view synthesis: synthesizing a target image with an arbitrary camera pose from a given set of source images.
The approach is self-supervised and only requires 2D images and associated view transforms for training.
HoloGAN is thus the first generative model to learn 3D representations from natural images in an entirely unsupervised manner.