Synthesize a target image with an arbitrary target camera pose from given source images and their camera poses.
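A minimal interface sketch of the task, assuming a hypothetical `ViewSynthesisModel` in PyTorch; the placeholder encoder/decoder only illustrate the inputs and outputs (a real model would use the camera poses to warp or condition the source features).

```python
import torch
import torch.nn as nn

class ViewSynthesisModel(nn.Module):
    """Hypothetical model: source images + poses -> image at a target pose."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Conv2d(3, 32, kernel_size=3, padding=1)  # placeholder encoder
        self.decoder = nn.Conv2d(32, 3, kernel_size=3, padding=1)  # placeholder decoder

    def forward(self, src_images, src_poses, tgt_pose):
        # src_images: (N, 3, H, W) source views
        # src_poses:  (N, 4, 4) camera-to-world matrices of the sources
        # tgt_pose:   (4, 4) camera-to-world matrix of the target view
        # The poses are accepted here only to show the interface; this toy
        # model does not actually use them.
        feats = self.encoder(src_images)           # per-source features
        fused = feats.mean(dim=0, keepdim=True)    # naive fusion across sources
        return torch.sigmoid(self.decoder(fused))  # (1, 3, H, W) synthesized target view
```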
We present a practical and robust deep learning solution for capturing and rendering novel views of complex real-world scenes for virtual exploration.
The view synthesis problem (generating novel views of a scene from known imagery) has recently garnered attention due in part to compelling applications in virtual and augmented reality.
HoloGAN is the first generative model that learns 3D representations from natural images in an entirely unsupervised manner.
We address the task of multi-view novel view synthesis, where the goal is to synthesize a target image with an arbitrary camera pose from given source images.
In this work, we address the lack of 3D understanding of generative neural networks by introducing a persistent 3D feature embedding for view synthesis.
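A minimal sketch of what a persistent 3D feature embedding can look like, assuming a learnable voxel grid of features queried by trilinear interpolation; the class name, resolution, and channel count are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class FeatureVolume(nn.Module):
    """A persistent grid of scene features, shared across all rendered views."""

    def __init__(self, channels=16, resolution=32):
        super().__init__()
        # One learnable feature volume per scene, optimized jointly with the renderer.
        self.volume = nn.Parameter(
            torch.randn(1, channels, resolution, resolution, resolution) * 0.01
        )

    def sample(self, points):
        # points: (1, P, 1, 1, 3) query coordinates in [-1, 1]^3 (e.g. along camera rays)
        # returns (1, C, P, 1, 1) trilinearly interpolated features
        return nn.functional.grid_sample(self.volume, points, align_corners=True)
```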
We present a new deep learning approach to blending for image-based rendering (IBR), in which we use held-out real image data to learn blending weights for combining input photo contributions.
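A minimal sketch of the learned-blending idea, assuming the source photos have already been warped (reprojected) into the target view; the tiny weight-prediction network below is an illustrative assumption, not the paper's architecture.

```python
import torch
import torch.nn as nn

class BlendingNet(nn.Module):
    """Predicts per-pixel weights for combining warped source contributions."""

    def __init__(self, num_sources):
        super().__init__()
        # One weight logit map per source view, predicted from the stacked warps.
        self.weight_head = nn.Conv2d(3 * num_sources, num_sources, kernel_size=3, padding=1)

    def forward(self, warped_sources):
        # warped_sources: (B, N, 3, H, W) source images warped into the target view
        b, n, c, h, w = warped_sources.shape
        logits = self.weight_head(warped_sources.reshape(b, n * c, h, w))  # (B, N, H, W)
        weights = torch.softmax(logits, dim=1).unsqueeze(2)  # normalize over the N sources
        return (weights * warped_sources).sum(dim=1)         # (B, 3, H, W) blended target view
```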
The approach is self-supervised and requires only 2D images and their associated view transforms for training.
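A minimal sketch of such a self-supervised training step, assuming a model with the interface of the hypothetical `ViewSynthesisModel` above: one posed image is held out as the target, and the photometric reconstruction error against it supervises training, so no 3D ground truth is needed.

```python
import torch

def training_step(model, src_images, src_poses, tgt_image, tgt_pose, optimizer):
    # Only 2D images and their view transforms (poses) are required:
    # the held-out image itself provides the supervision signal.
    pred = model(src_images, src_poses, tgt_pose)        # (1, 3, H, W) synthesized target
    loss = torch.nn.functional.l1_loss(pred, tgt_image)  # photometric reconstruction loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```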