Novel View Synthesis
78 papers with code • 10 benchmarks • 12 datasets
Novel view synthesis is the task of synthesizing a target image from an arbitrary target camera pose, given one or more source images and their camera poses.
(Image credit: Multi-view to Novel View: Synthesizing Novel Views with Self-Learned Confidence)
Most implemented papers
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x, y, z)$ and viewing direction $(\theta, \phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location.
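To make the input/output structure concrete, here is a minimal NumPy sketch of such a network: a 5D coordinate (position plus viewing direction) is frequency-encoded and passed through a small fully connected network that emits a volume density and a view-dependent color. The weights are random and the layer sizes are arbitrary, so this is illustrative only, not the trained NeRF architecture.

```python
import numpy as np

def positional_encoding(x, n_freqs=4):
    """Map each coordinate to [x, sin(2^k pi x), cos(2^k pi x)] features,
    as NeRF does to help the MLP represent high-frequency detail."""
    feats = [x]
    for k in range(n_freqs):
        feats.append(np.sin(2.0 ** k * np.pi * x))
        feats.append(np.cos(2.0 ** k * np.pi * x))
    return np.concatenate(feats, axis=-1)

rng = np.random.default_rng(0)

def tiny_nerf_mlp(xyz, view_dir, hidden=32):
    """Toy fully connected network: 5D input (x, y, z, theta, phi)
    -> (volume density sigma, RGB radiance).
    Weights are random here; a real NeRF optimizes them per scene."""
    inp = positional_encoding(np.concatenate([xyz, view_dir], axis=-1))
    w1 = rng.standard_normal((inp.shape[-1], hidden)) * 0.1
    w2 = rng.standard_normal((hidden, 4)) * 0.1
    h = np.maximum(inp @ w1, 0.0)              # ReLU hidden layer
    out = h @ w2
    sigma = np.maximum(out[..., 0], 0.0)       # density must be non-negative
    rgb = 1.0 / (1.0 + np.exp(-out[..., 1:]))  # sigmoid keeps colors in [0, 1]
    return sigma, rgb
```

Rendering then alpha-composites these densities and colors along each camera ray to form a pixel.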
Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate.
View Synthesis by Appearance Flow
We address the problem of novel view synthesis: given an input image, synthesizing new images of the same object or scene observed from arbitrary viewpoints.
NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction
In NeuS, we propose to represent a surface as the zero-level set of a signed distance function (SDF) and develop a new volume rendering method to train a neural SDF representation.
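The key device in NeuS is a density that concentrates probability mass at the SDF's zero-level set. A minimal sketch of that idea (the logistic "S-density" applied to SDF values; the scale `s` here is a free parameter chosen for illustration):

```python
import numpy as np

def s_density(sdf, s=10.0):
    """Logistic density phi_s applied to signed-distance values:
    it peaks where sdf == 0 (i.e. on the surface) and sharpens
    toward a delta function as s grows."""
    e = np.exp(-s * sdf)
    return s * e / (1.0 + e) ** 2

# SDF of a unit sphere, sampled along a ray through the origin:
t = np.linspace(0.0, 2.0, 5)
sdf = np.abs(t) - 1.0            # zero at t = 1, the surface crossing
weights = s_density(sdf)         # mass concentrates near the surface
```

Volume rendering with weights derived from this density lets the SDF be trained from images alone, and the surface is extracted afterwards as the zero-level set.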
Deferred Neural Rendering: Image Synthesis using Neural Textures
Similar to traditional textures, neural textures are stored as maps on top of 3D mesh proxies; however, the high-dimensional feature maps contain significantly more information, which can be interpreted by our new deferred neural rendering pipeline.
SynSin: End-to-end View Synthesis from a Single Image
Single image view synthesis allows for the generation of new views of a scene given a single input image.
Liquid Warping GAN with Attention: A Unified Framework for Human Image Synthesis
Also, we build a new dataset, namely the iPER dataset, for the evaluation of human motion imitation, appearance transfer, and novel view synthesis.
pixelNeRF: Neural Radiance Fields from One or Few Images
This allows the network to be trained across multiple scenes to learn a scene prior, enabling it to perform novel view synthesis in a feed-forward manner from a sparse set of views (as few as one).
Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans
To this end, we propose Neural Body, a new human body representation which assumes that the learned neural representations at different frames share the same set of latent codes anchored to a deformable mesh, so that the observations across frames can be naturally integrated.
NeRF--: Neural Radiance Fields Without Known Camera Parameters
Considering the problem of novel view synthesis (NVS) from only a set of 2D images, we simplify the training process of Neural Radiance Field (NeRF) on forward-facing scenes by removing the requirement of known or pre-computed camera parameters, including both intrinsics and 6DoF poses.
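The enabling trick is to make each camera pose a small set of free parameters optimized jointly with the radiance field. A sketch of one common parameterization (axis-angle rotation via the Rodrigues formula plus a translation, six numbers per image; the function name and layout are illustrative, not the paper's code):

```python
import numpy as np

def pose_from_params(params):
    """Build a 3x4 camera-to-world matrix from 6 free parameters:
    an axis-angle rotation r (3 values) and a translation t (3 values).
    Treating such per-image parameters as optimization variables removes
    the need for poses precomputed by structure-from-motion."""
    r, t = params[:3], params[3:]
    theta = np.linalg.norm(r)
    if theta < 1e-8:
        R = np.eye(3)
    else:
        k = r / theta                         # unit rotation axis
        K = np.array([[0.0, -k[2], k[1]],     # cross-product matrix
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        # Rodrigues formula: R = I + sin(theta) K + (1 - cos(theta)) K^2
        R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    return np.concatenate([R, t[:, None]], axis=1)
```

Because the map from parameters to pose is differentiable, gradients from the photometric loss flow into both the radiance field and the camera parameters.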