Novel View Synthesis
172 papers with code • 15 benchmarks • 22 datasets
Synthesize a target image with an arbitrary target camera pose from given source images and their camera poses.
(Image credit: Multi-view to Novel view: Synthesizing novel views with Self-Learned Confidence)
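Concretely, a camera pose is usually a rigid world-to-camera transform, and the target view is specified relative to the sources. A minimal NumPy sketch (the function name and the 4x4 matrix convention are assumptions for illustration, not taken from any listed paper):

```python
import numpy as np

def relative_pose(T_src, T_tgt):
    """Transform that maps source-camera coordinates to target-camera
    coordinates, given 4x4 world-to-camera extrinsics for each view."""
    return T_tgt @ np.linalg.inv(T_src)
```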
Most implemented papers
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x, y, z)$ and viewing direction $(\theta, \phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location.
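A minimal PyTorch sketch of such a network, assuming the layer widths shown here (the paper uses a deeper MLP and additionally applies a positional encoding to the inputs); density depends on position only, while radiance also sees the viewing direction:

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Maps a 5D coordinate (x, y, z, theta, phi) to volume density and
    view-dependent RGB radiance. Widths and depth are illustrative."""
    def __init__(self, hidden=256):
        super().__init__()
        self.pos_mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma = nn.Linear(hidden, 1)              # density from position only
        self.rgb = nn.Sequential(                      # radiance also uses direction
            nn.Linear(hidden + 2, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, dirs):                      # dirs: (theta, phi)
        h = self.pos_mlp(xyz)
        sigma = torch.relu(self.sigma(h))
        rgb = self.rgb(torch.cat([h, dirs], dim=-1))
        return sigma, rgb
```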
Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate.
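The paper's remedy is to replace most of the network with learnable features looked up from hashed multiresolution grids. A simplified sketch, assuming the sizes and growth factor shown (the paper also trilinearly interpolates among the 8 surrounding grid vertices at each level, omitted here for brevity):

```python
import torch
import torch.nn as nn

class HashEncoding(nn.Module):
    """Each resolution level hashes integer grid coordinates into a small
    table of learnable feature vectors; per-level features are concatenated
    and fed to a tiny MLP. Table size and hash primes are illustrative."""
    def __init__(self, levels=8, table_size=2**14, feat_dim=2, base_res=16):
        super().__init__()
        self.res = [int(base_res * 1.5 ** l) for l in range(levels)]
        self.table_size = table_size
        self.tables = nn.Parameter(torch.randn(levels, table_size, feat_dim) * 1e-4)

    def forward(self, x):                      # x: (N, 3) points in [0, 1]
        feats = []
        for l, res in enumerate(self.res):
            idx = (x * res).long()             # nearest grid vertex at this level
            h = (idx[..., 0]
                 ^ (idx[..., 1] * 2654435761)
                 ^ (idx[..., 2] * 805459861)) % self.table_size
            feats.append(self.tables[l][h])
        return torch.cat(feats, dim=-1)        # (N, levels * feat_dim)
```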
NeRF--: Neural Radiance Fields Without Known Camera Parameters
Considering the problem of novel view synthesis (NVS) from only a set of 2D images, we simplify the training process of Neural Radiance Field (NeRF) on forward-facing scenes by removing the requirement of known or pre-computed camera parameters, including both intrinsics and 6DoF poses.
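In practice this amounts to making the camera parameters themselves learnable and letting the photometric reconstruction loss update them alongside the radiance field. A sketch under assumed names and parameterization (NeRF-- similarly optimizes a shared focal length and per-image 6DoF poses):

```python
import torch
import torch.nn as nn

class LearnableCameras(nn.Module):
    """Camera parameters as trainable tensors, optimized jointly with the
    NeRF through the rendering loss. Initial values are illustrative."""
    def __init__(self, n_images, init_focal=500.0):
        super().__init__()
        self.focal = nn.Parameter(torch.tensor(init_focal))   # shared intrinsics
        self.rot = nn.Parameter(torch.zeros(n_images, 3))     # axis-angle rotation
        self.trans = nn.Parameter(torch.zeros(n_images, 3))   # camera translation
```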
PlenOctrees for Real-time Rendering of Neural Radiance Fields
We introduce a method to render Neural Radiance Fields (NeRFs) in real time using PlenOctrees, an octree-based 3D representation which supports view-dependent effects.
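Real-time speed comes from baking the trained NeRF into a sparse octree whose leaves store precomputed density and spherical-harmonic color coefficients, so rendering needs no network evaluation. A sketch of a leaf lookup over a hypothetical nested-dict octree (the data layout is an assumption for illustration):

```python
import numpy as np

def octree_lookup(node, point, max_depth=16):
    """Descend a sparse octree to the leaf cell containing `point` in [0, 1)^3.
    Internal nodes map a child index 0-7 to a subtree; leaves hold the
    baked (sigma, sh_coeffs) values queried during ray marching."""
    lo, size = np.zeros(3), 1.0
    for _ in range(max_depth):
        if "leaf" in node:
            return node["leaf"]                    # (sigma, sh_coeffs)
        size /= 2.0
        octant = (point >= lo + size).astype(int)  # upper/lower half per axis
        node = node[int(octant[0] * 4 + octant[1] * 2 + octant[2])]
        lo = lo + octant * size
    return None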
View Synthesis by Appearance Flow
We address the problem of novel view synthesis: given an input image, synthesizing new images of the same object or scene observed from arbitrary viewpoints.
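The key idea of appearance flow is that the network predicts, for each target pixel, where to sample in the source image rather than predicting colors directly. A minimal PyTorch sketch (function and argument names are assumptions):

```python
import torch
import torch.nn.functional as F

def warp_by_appearance_flow(src, flow):
    """Synthesize the target view by resampling source pixels at the
    predicted sampling locations.
    src:  (B, C, H, W) source image
    flow: (B, H, W, 2) predicted coordinates in [-1, 1], (x, y) order"""
    return F.grid_sample(src, flow, mode="bilinear", align_corners=False)
```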
NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction
In NeuS, we propose to represent a surface as the zero-level set of a signed distance function (SDF) and develop a new volume rendering method to train a neural SDF representation.
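The bridge between the SDF and volume rendering is a logistic density that concentrates rendering weight at the zero-level set, i.e. on the surface. A sketch of that density, with an illustrative sharpness constant (the paper learns the scale parameter during training):

```python
import torch

def logistic_density(sdf, s=50.0):
    """NeuS-style logistic density phi_s(x) = s * e^{-s x} / (1 + e^{-s x})^2,
    which peaks where the signed distance is zero. The sharpness `s` is
    learned in the paper; 50.0 here is an illustrative constant."""
    sig = torch.sigmoid(-s * sdf)
    return s * sig * (1.0 - sig)
```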
HoloGAN: Unsupervised learning of 3D representations from natural images
HoloGAN is the first generative model to learn 3D representations from natural images in an entirely unsupervised manner.
Deferred Neural Rendering: Image Synthesis using Neural Textures
Similar to traditional textures, neural textures are stored as maps on top of 3D mesh proxies; however, the high-dimensional feature maps contain significantly more information, which can be interpreted by our new deferred neural rendering pipeline.
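Mechanically, rendering rasterizes the proxy mesh's UV coordinates and fetches features from the learnable texture, which a small decoder network then shades into RGB. A sketch of the sampling step, assuming the shapes and names shown:

```python
import torch
import torch.nn.functional as F

def sample_neural_texture(texture, uv):
    """Fetch high-dimensional features from a learnable neural texture at
    UVs rasterized from the mesh proxy; a decoder CNN (not shown) turns
    the resulting feature image into RGB, i.e. deferred shading.
    texture: (1, C, H, W) learnable feature map with C >> 3
    uv:      (1, H_out, W_out, 2) proxy UVs remapped to [-1, 1]"""
    return F.grid_sample(texture, uv, mode="bilinear", align_corners=False)
```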
SynSin: End-to-end View Synthesis from a Single Image
Single-image view synthesis allows for the generation of new views of a scene given a single input image.
Liquid Warping GAN with Attention: A Unified Framework for Human Image Synthesis
We also build a new dataset, the iPER dataset, for evaluating human motion imitation, appearance transfer, and novel view synthesis.