Novel View Synthesis

172 papers with code • 15 benchmarks • 22 datasets

The task is to synthesize a target image with an arbitrary target camera pose from given source images and their camera poses.

(Image credit: Multi-view to Novel view: Synthesizing novel views with Self-Learned Confidence)

Most implemented papers

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

bmild/nerf ECCV 2020

Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x, y, z)$ and viewing direction $(\theta, \phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location.
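
The 5D-coordinate-to-density-and-radiance mapping can be sketched as a tiny untrained MLP. This is a minimal illustration in numpy with random weights and hypothetical layer sizes; a real NeRF is much larger and trains its weights by backpropagation through a differentiable volume renderer.

```python
import numpy as np

def positional_encoding(x, n_freqs=4):
    """Map each coordinate to sin/cos features at increasing frequencies,
    as NeRF does before feeding points to the MLP."""
    feats = [x]
    for i in range(n_freqs):
        feats.append(np.sin(2.0**i * np.pi * x))
        feats.append(np.cos(2.0**i * np.pi * x))
    return np.concatenate(feats, axis=-1)

rng = np.random.default_rng(0)

def mlp(x, dims):
    """Tiny fully connected network with ReLU hidden layers (random
    weights here, purely for illustration)."""
    for i, (d_in, d_out) in enumerate(zip(dims[:-1], dims[1:])):
        W = rng.standard_normal((d_in, d_out)) * 0.1
        x = x @ W
        if i < len(dims) - 2:
            x = np.maximum(x, 0.0)
    return x

# One 5D query: spatial location (x, y, z) and viewing direction (theta, phi).
p = np.array([[0.1, -0.2, 0.3, 0.5, 1.0]])
h = positional_encoding(p)             # (1, 45)
out = mlp(h, [h.shape[-1], 64, 4])     # 4 outputs: density + RGB radiance
sigma = np.log1p(np.exp(out[..., 0]))  # softplus keeps volume density >= 0
rgb = 1.0 / (1.0 + np.exp(-out[..., 1:]))  # sigmoid keeps radiance in [0, 1]
print(sigma.shape, rgb.shape)
```

Querying this function at many points along each camera ray, then compositing with classical volume rendering, yields the synthesized view.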

Instant Neural Graphics Primitives with a Multiresolution Hash Encoding

nvlabs/instant-ngp 16 Jan 2022

Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate.
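
The speedup comes from replacing most of the network with trainable feature tables indexed by a spatial hash at several grid resolutions. A minimal sketch (numpy; nearest-cell lookup only, whereas the actual method trilinearly interpolates the eight surrounding corners and trains the table entries; all sizes here are illustrative):

```python
import numpy as np

# Hypothetical sizes; the real encoding uses larger, trainable tables.
N_LEVELS, TABLE_SIZE, FEATURE_DIM = 4, 2**10, 2
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

rng = np.random.default_rng(0)
tables = rng.standard_normal((N_LEVELS, TABLE_SIZE, FEATURE_DIM)) * 1e-2

def hash_grid(x):
    """Look up features for a 3D point in [0, 1)^3 at several grid
    resolutions and concatenate them — the core of the multiresolution
    hash encoding."""
    feats = []
    for level in range(N_LEVELS):
        res = 16 * 2**level                  # resolution doubles per level
        cell = np.floor(x * res).astype(np.uint64)
        idx = np.bitwise_xor.reduce(cell * PRIMES) % TABLE_SIZE
        feats.append(tables[level, int(idx)])
    return np.concatenate(feats)

f = hash_grid(np.array([0.25, 0.5, 0.75]))
print(f.shape)  # (8,) — N_LEVELS * FEATURE_DIM features
```

Because the lookup is O(levels) and the remaining MLP can be tiny, both training and evaluation become far cheaper than a monolithic network.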

NeRF--: Neural Radiance Fields Without Known Camera Parameters

ActiveVisionLab/nerfmm 14 Feb 2021

Considering the problem of novel view synthesis (NVS) from only a set of 2D images, we simplify the training process of Neural Radiance Field (NeRF) on forward-facing scenes by removing the requirement of known or pre-computed camera parameters, including both intrinsics and 6DoF poses.

PlenOctrees for Real-time Rendering of Neural Radiance Fields

sxyu/plenoctree ICCV 2021

We introduce a method to render Neural Radiance Fields (NeRFs) in real time using PlenOctrees, an octree-based 3D representation which supports view-dependent effects.
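
The real-time gain comes from querying a precomputed tree instead of a network. A toy octree descent illustrates the lookup (numpy; the child-octant layout and depth here are hypothetical):

```python
import numpy as np

def octree_index(p, depth):
    """Descend an octree over the unit cube: at each level, pick the child
    octant by comparing the point to the cell center. In PlenOctrees the
    leaf reached this way stores precomputed appearance values, so no
    network evaluation is needed at render time."""
    lo, hi = np.zeros(3), np.ones(3)
    path = []
    for _ in range(depth):
        mid = (lo + hi) / 2.0
        octant = (int(p[0] >= mid[0]) * 4
                  + int(p[1] >= mid[1]) * 2
                  + int(p[2] >= mid[2]))
        path.append(octant)
        lo = np.where(p >= mid, mid, lo)
        hi = np.where(p >= mid, hi, mid)
    return path

print(octree_index(np.array([0.9, 0.1, 0.6]), 3))  # [5, 4, 4]
```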

View Synthesis by Appearance Flow

RenYurui/Global-Flow-Local-Attention 11 May 2016

We address the problem of novel view synthesis: given an input image, synthesizing new images of the same object or scene observed from arbitrary viewpoints.
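
The idea of warping source pixels by a predicted flow field can be sketched with nearest-neighbor sampling (numpy; the paper's sampler is bilinear and its flow is predicted by a network, whereas the flow below is a dummy constant shift):

```python
import numpy as np

def warp_with_flow(src, flow):
    """Resample a source image at coordinates offset by an appearance-flow
    field (nearest-neighbor here for brevity; a differentiable pipeline
    would use bilinear sampling)."""
    h, w = src.shape[:2]
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    sample_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    sample_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return src[sample_y, sample_x]

src = np.arange(16.0).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 1] = 1.0                 # sample every pixel one column to the right
out = warp_with_flow(src, flow)
print(out[0])                      # [1. 2. 3. 3.]
```

The synthesized view reuses source-image colors directly; only the sampling locations are predicted.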

NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction

Totoro97/NeuS NeurIPS 2021

In NeuS, we propose to represent a surface as the zero-level set of a signed distance function (SDF) and develop a new volume rendering method to train a neural SDF representation.
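
The key construction — a rendering density that peaks exactly on the SDF's zero-level set — can be sketched with a sphere SDF and the logistic density (numpy; the slope parameter `s` is fixed here for illustration, whereas NeuS learns it during training):

```python
import numpy as np

def sdf_sphere(p, radius=1.0):
    """Signed distance to a sphere: negative inside, zero on the surface."""
    return np.linalg.norm(p, axis=-1) - radius

def s_density(sdf, s=50.0):
    """Logistic density applied to the SDF value — it peaks at the
    zero-level set, so volume rendering concentrates on the surface."""
    e = np.exp(-s * sdf)
    return s * e / (1.0 + e) ** 2

# Sample points along a ray from the origin outward along +x.
t = np.linspace(0.0, 2.0, 201)
pts = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=-1)
w = s_density(sdf_sphere(pts))
print(t[np.argmax(w)])  # density peaks at t = 1.0, the sphere surface
```

As `s` grows, the density narrows around the surface, which is how the learned SDF and the volume-rendered images stay consistent.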

HoloGAN: Unsupervised learning of 3D representations from natural images

thunguyenphuoc/HoloGAN ICCV 2019

This shows that HoloGAN is the first generative model that learns 3D representations from natural images in an entirely unsupervised manner.

Deferred Neural Rendering: Image Synthesis using Neural Textures

SSRSGJYD/NeuralTexture 28 Apr 2019

Similar to traditional textures, neural textures are stored as maps on top of 3D mesh proxies; however, the high-dimensional feature maps contain significantly more information, which can be interpreted by our new deferred neural rendering pipeline.
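
A toy version of the sample-then-decode idea (numpy; nearest-neighbor UV sampling and a random linear "decoder" stand in for the learned neural texture and the deferred neural rendering network):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 8-channel neural texture stored on a 16x16 UV map.
neural_texture = rng.standard_normal((16, 16, 8))

def sample_texture(uv):
    """Nearest-neighbor lookup of a high-dimensional feature vector by UV
    coordinate (a real pipeline samples bilinearly and learns the texels)."""
    j = min(int(uv[0] * 16), 15)
    i = min(int(uv[1] * 16), 15)
    return neural_texture[i, j]

def decode(feat):
    """Stand-in for the deferred renderer: a fixed random linear map from
    features to RGB (the real decoder is a trained network)."""
    W = rng.standard_normal((8, 3)) * 0.1
    return 1.0 / (1.0 + np.exp(-(feat @ W)))  # sigmoid to [0, 1]

rgb = decode(sample_texture((0.3, 0.7)))
print(rgb.shape)  # (3,)
```

Rasterizing the mesh proxy gives per-pixel UVs; sampling the feature maps and decoding them per pixel produces the final image.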

SynSin: End-to-end View Synthesis from a Single Image

facebookresearch/synsin CVPR 2020

Single image view synthesis allows for the generation of new views of a scene given a single input image.

Liquid Warping GAN with Attention: A Unified Framework for Human Image Synthesis

iPERDance/iPERCore 18 Nov 2020

Also, we build a new dataset, namely the iPER dataset, for the evaluation of human motion imitation, appearance transfer, and novel view synthesis.