Novel View Synthesis

324 papers with code • 17 benchmarks • 34 datasets

Synthesize a target image with an arbitrary target camera pose from given source images and their camera poses.

See the Wiki for a more detailed introduction.

Synthesis methods include NeRF, multiplane images (MPI), and others.

(Image credit: Multi-view to Novel view: Synthesizing novel views with Self-Learned Confidence)



Most implemented papers

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

bmild/nerf ECCV 2020

Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x, y, z)$ and viewing direction $(\theta, \phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location.
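The core idea can be sketched in a few lines: a frequency (positional) encoding lifts the 5D input to a higher-dimensional space, and an MLP maps it to density and view-dependent color. The layer sizes and random weights below are purely illustrative (the paper's trained model uses 8 layers of width 256), assuming numpy only:

```python
import numpy as np

def positional_encoding(x, num_freqs=10):
    # Map each coordinate to [sin(2^k x), cos(2^k x)] for k = 0..num_freqs-1,
    # as in the NeRF paper, so the MLP can represent high-frequency detail.
    freqs = 2.0 ** np.arange(num_freqs)           # (num_freqs,)
    angles = x[..., None] * freqs                 # (..., D, num_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)         # (..., D * 2 * num_freqs)

rng = np.random.default_rng(0)

def tiny_nerf_forward(xyz, view_dir):
    # Illustrative forward pass with random weights; a real NeRF trains these
    # weights by volume-rendering rays and comparing to the input images.
    h = positional_encoding(xyz, num_freqs=10)            # (N, 60)
    h = np.maximum(h @ rng.standard_normal((60, 64)), 0.0)
    sigma = np.maximum(h @ rng.standard_normal(64), 0.0)  # density >= 0, view-independent
    d = positional_encoding(view_dir, num_freqs=4)        # (N, 24)
    feat = np.concatenate([h, d], axis=-1)                # color depends on view direction
    rgb = 1.0 / (1.0 + np.exp(-(feat @ rng.standard_normal((88, 3)))))
    return sigma, rgb
```

Note that density depends only on position while color also sees the viewing direction, which is what lets NeRF model specular effects.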

Instant Neural Graphics Primitives with a Multiresolution Hash Encoding

nvlabs/instant-ngp 16 Jan 2022

Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate.
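Instant-NGP's speedup comes from replacing most of the MLP with a learned multiresolution hash table of features. A minimal sketch of the lookup (nearest-corner only; the real implementation trilinearly interpolates 8 corners per level, and the table sizes here are made up):

```python
import numpy as np

def hash_encode(xyz, num_levels=4, table_size=2**14, feat_dim=2, base_res=16):
    # xyz in [0, 1)^3. Each level hashes grid-cell corners at a finer
    # resolution into a small feature table (the tables are trained in the
    # real method; here they are random for illustration).
    rng = np.random.default_rng(0)
    tables = rng.standard_normal((num_levels, table_size, feat_dim)) * 1e-2
    primes = np.array([1, 2654435761, 805459861], dtype=np.uint64)  # from the paper
    feats = []
    for lvl in range(num_levels):
        res = base_res * 2**lvl
        grid = np.floor(xyz * res).astype(np.uint64)          # cell corner index
        idx = np.bitwise_xor.reduce(grid * primes, axis=-1) % table_size
        feats.append(tables[lvl][idx])
    return np.concatenate(feats, axis=-1)  # (..., num_levels * feat_dim)
```

Because most capacity lives in the tables, the remaining MLP can be tiny, which is what makes training take seconds to minutes rather than hours.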

NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction

Totoro97/NeuS NeurIPS 2021

In NeuS, we propose to represent a surface as the zero-level set of a signed distance function (SDF) and develop a new volume rendering method to train a neural SDF representation.
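The key trick is turning an SDF into a density suitable for volume rendering: NeuS uses the logistic density of the SDF value (the "S-density"), which peaks exactly at the zero-level set so rendering weight concentrates on the surface. A minimal sketch:

```python
import numpy as np

def s_density(sdf, s=64.0):
    # Logistic density of the SDF value: (s/4) * sech^2(s * sdf / 2).
    # It is maximal where sdf == 0 (i.e. on the surface); s controls the
    # sharpness of the peak and is trained/annealed in NeuS.
    return s / (4.0 * np.cosh(0.5 * s * np.asarray(sdf)) ** 2)
```

Because the surface is the zero-level set of a trained SDF, a clean mesh can be extracted afterwards with marching cubes, unlike a raw NeRF density field.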

NeRF--: Neural Radiance Fields Without Known Camera Parameters

ActiveVisionLab/nerfmm 14 Feb 2021

Considering the problem of novel view synthesis (NVS) from only a set of 2D images, we simplify the training process of Neural Radiance Field (NeRF) on forward-facing scenes by removing the requirement of known or pre-computed camera parameters, including both intrinsics and 6DoF poses.
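The idea is to make each camera pose a small set of learnable parameters optimized jointly with the network. A common parameterization (axis-angle rotation plus translation, converted via the Rodrigues formula) can be sketched as follows; the 6-vector `p` would receive gradients alongside the NeRF weights:

```python
import numpy as np

def pose_from_params(p):
    # p: learnable 6-vector = axis-angle rotation r (3) + translation t (3).
    r, t = p[:3], p[3:]
    theta = np.linalg.norm(r)
    if theta < 1e-8:
        R = np.eye(3)
    else:
        k = r / theta  # unit rotation axis
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        # Rodrigues formula: R = I + sin(theta) K + (1 - cos(theta)) K^2
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    M = np.eye(4)
    M[:3, :3], M[:3, 3] = R, t
    return M  # 4x4 camera-to-world matrix
```

Rays are then generated from this matrix as usual, and the photometric loss backpropagates into both the scene representation and the pose parameters.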

PlenOctrees for Real-time Rendering of Neural Radiance Fields

sxyu/plenoctree ICCV 2021

We introduce a method to render Neural Radiance Fields (NeRFs) in real time using PlenOctrees, an octree-based 3D representation which supports view-dependent effects.
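Rendering is fast because a ray query is just a descent through the octree to a leaf holding precomputed values, with no network evaluation. A simplified lookup (the node structure and child ordering here are illustrative; real PlenOctree leaves store density plus spherical-harmonic coefficients for view-dependent color):

```python
def octree_lookup(node, xyz):
    # Descend to the leaf containing point xyz in [0,1)^3. Internal nodes are
    # dicts with 8 children; anything else is treated as a leaf payload.
    while isinstance(node, dict):
        child, rescaled = 0, []
        for c in xyz:
            bit = int(c >= 0.5)          # which half along this axis
            child = child * 2 + bit      # 3 bits -> child index 0..7
            rescaled.append(2 * c - bit) # re-normalize into the child's cube
        node, xyz = node["children"][child], rescaled
    return node
```

The octree is built by converting a trained NeRF, so the quality of the underlying model is preserved while per-ray cost drops to a handful of table walks.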

View Synthesis by Appearance Flow

RenYurui/Global-Flow-Local-Attention 11 May 2016

We address the problem of novel view synthesis: given an input image, synthesizing new images of the same object or scene observed from arbitrary viewpoints.
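Rather than generating pixels from scratch, appearance flow predicts, for each target pixel, where in the source image to copy from. A warping sketch with nearest-neighbor sampling (the paper uses differentiable bilinear sampling, and the offset-style flow here is one possible convention):

```python
import numpy as np

def warp_with_flow(src, flow):
    # src: (H, W) or (H, W, C) source image.
    # flow: (H, W, 2) per-pixel (dx, dy) sampling offsets predicted by a network.
    H, W = src.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    sx = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    sy = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return src[sy, sx]  # gather source pixels into the target frame
</n```

Copying pixels preserves source textures exactly, which is why flow-based synthesis tends to look sharper than direct pixel generation where the flow is valid.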

Deferred Neural Rendering: Image Synthesis using Neural Textures

SSRSGJYD/NeuralTexture 28 Apr 2019

Similar to traditional textures, neural textures are stored as maps on top of 3D mesh proxies; however, the high-dimensional feature maps contain significantly more information, which can be interpreted by our new deferred neural rendering pipeline.
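Mechanically, a neural texture is sampled just like an ordinary texture, except each texel holds a learned high-dimensional feature that a small decoder network later turns into RGB. A sampling sketch (nearest lookup; real pipelines use mipmapped bilinear sampling, and the channel count is illustrative):

```python
import numpy as np

def sample_neural_texture(texture, uv):
    # texture: (H, W, C) learned feature map with C > 3 channels,
    # stored on a 3D mesh proxy's UV atlas.
    # uv: (..., 2) texture coordinates in [0, 1) from rasterizing the mesh.
    H, W, _ = texture.shape
    x = np.clip((uv[..., 0] * W).astype(int), 0, W - 1)
    y = np.clip((uv[..., 1] * H).astype(int), 0, H - 1)
    return texture[y, x]  # (..., C) features fed to the deferred neural renderer
```

The sampled feature map for a whole frame is then passed through the deferred rendering network, which resolves view-dependent appearance that a plain RGB texture could not store.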

HoloGAN: Unsupervised learning of 3D representations from natural images

thunguyenphuoc/HoloGAN ICCV 2019

This shows that HoloGAN is the first generative model that learns 3D representations from natural images in an entirely unsupervised manner.

SynSin: End-to-end View Synthesis from a Single Image

facebookresearch/synsin CVPR 2020

Single image view synthesis allows for the generation of new views of a scene given a single input image.

Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans

zju3dv/neuralbody CVPR 2021

To this end, we propose Neural Body, a new human body representation which assumes that the learned neural representations at different frames share the same set of latent codes anchored to a deformable mesh, so that the observations across frames can be naturally integrated.