Novel View Synthesis

501 papers with code • 20 benchmarks • 39 datasets

Novel view synthesis is the task of synthesizing a target image from an arbitrary target camera pose, given one or more source images and their camera poses.

See the Wiki for a more detailed introduction.

Common synthesis methods include NeRF, multi-plane images (MPI), and 3D Gaussian Splatting (3DGS), among others.
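
As a rough illustration of the task's inputs and outputs, here is a hypothetical interface; the function name, argument shapes, and layout are purely illustrative and not taken from any particular codebase:

```python
def synthesize_novel_view(source_images, source_poses, target_pose, intrinsics):
    """Illustrative interface for novel view synthesis (hypothetical signature).

    source_images: (N, H, W, 3) source RGB images.
    source_poses:  (N, 4, 4) camera-to-world matrices for the source views.
    target_pose:   (4, 4) camera-to-world matrix of the desired novel view.
    intrinsics:    (3, 3) pinhole camera intrinsics.

    Returns an (H, W, 3) image as seen from target_pose; the body is supplied
    by a concrete method such as NeRF, an MPI, or 3D Gaussian Splatting.
    """
    raise NotImplementedError
```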

(Image credit: Multi-view to Novel view: Synthesizing novel views with Self-Learned Confidence)

Most implemented papers

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

bmild/nerf ECCV 2020

Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x, y, z)$ and viewing direction $(\theta, \phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location.
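
A minimal sketch of that field as a fully-connected network, assuming a PyTorch setting; the layer sizes and encoding frequencies are simplified relative to the paper:

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs):
    # Map each coordinate to [sin(2^k * x), cos(2^k * x)] for k = 0..num_freqs-1.
    freqs = 2.0 ** torch.arange(num_freqs, dtype=x.dtype, device=x.device)
    angles = x[..., None] * freqs                 # (..., dim, num_freqs)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)              # (..., dim * 2 * num_freqs)

class TinyNeRF(nn.Module):
    """Simplified NeRF field: (x, y, z, viewing direction) -> (density, RGB)."""
    def __init__(self, pos_freqs=10, dir_freqs=4, hidden=256):
        super().__init__()
        pos_dim, dir_dim = 3 * 2 * pos_freqs, 3 * 2 * dir_freqs
        self.pos_freqs, self.dir_freqs = pos_freqs, dir_freqs
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)   # view-independent volume density
        self.color_head = nn.Sequential(           # view-dependent emitted radiance
            nn.Linear(hidden + dir_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, view_dir):
        h = self.trunk(positional_encoding(xyz, self.pos_freqs))
        sigma = torch.relu(self.density_head(h))
        rgb = self.color_head(
            torch.cat([h, positional_encoding(view_dir, self.dir_freqs)], dim=-1))
        return sigma, rgb
```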

Instant Neural Graphics Primitives with a Multiresolution Hash Encoding

nvlabs/instant-ngp 16 Jan 2022

Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate.
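
The speed-up comes from replacing most of the MLP's work with a multiresolution hash encoding: trainable feature vectors stored in hash tables, gathered at grid corners and interpolated. Below is a sketch of a single resolution level; the table size, resolution, and feature dimension are illustrative, and the spatial hash follows the prime-XOR form described in the paper:

```python
import torch
import torch.nn as nn

class HashGridLevel(nn.Module):
    """One level of a multiresolution hash encoding (illustrative sketch).

    Grid vertices are hashed into a fixed-size table of learnable feature
    vectors; a query point trilinearly interpolates its 8 surrounding corners.
    """
    def __init__(self, resolution=64, table_size=2**14, feat_dim=2):
        super().__init__()
        self.resolution = resolution
        self.table_size = table_size
        self.table = nn.Parameter(torch.randn(table_size, feat_dim) * 1e-4)
        # Large primes decorrelate the spatial hash, as described in the paper.
        self.register_buffer("primes", torch.tensor([1, 2654435761, 805459861]))

    def hash(self, ijk):
        # ijk: (..., 3) integer grid coordinates -> hash-table indices.
        h = ijk * self.primes
        return (h[..., 0] ^ h[..., 1] ^ h[..., 2]) % self.table_size

    def forward(self, xyz):
        # xyz in [0, 1]^3; scale to the grid and blend the 8 corner features.
        g = xyz * (self.resolution - 1)
        lo = g.floor().long()
        frac = g - lo.float()
        feats = 0.0
        for dz in (0, 1):
            for dy in (0, 1):
                for dx in (0, 1):
                    corner = lo + torch.tensor([dx, dy, dz], device=lo.device)
                    w = ((frac if dx else 1 - frac)[..., 0:1]
                         * (frac if dy else 1 - frac)[..., 1:2]
                         * (frac if dz else 1 - frac)[..., 2:3])
                    feats = feats + w * self.table[self.hash(corner)]
        return feats  # (..., feat_dim); the full method feeds this to a tiny MLP
```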

NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction

Totoro97/NeuS NeurIPS 2021

In NeuS, we propose to represent a surface as the zero-level set of a signed distance function (SDF) and develop a new volume rendering method to train a neural SDF representation.
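
A simplified sketch of the core idea, loosely following the paper's formulation: along each ray, opacity is derived from a logistic CDF of the sampled SDF values, so compositing weights concentrate where the ray crosses the zero-level set. The function name, epsilon values, and sample layout below are illustrative:

```python
import torch

def neus_weights(sdf_vals, inv_s):
    """Per-segment blending weights along a ray from SDF samples (simplified sketch).

    sdf_vals: (num_rays, num_samples) signed distances at ordered ray samples.
    inv_s:    scalar sharpness of the logistic CDF that softens the surface.
    """
    # Logistic CDF of the SDF at consecutive samples.
    cdf = torch.sigmoid(sdf_vals * inv_s)
    prev_cdf, next_cdf = cdf[:, :-1], cdf[:, 1:]
    # Opacity is the (clamped) relative drop of the CDF across each segment,
    # so it peaks where the ray crosses the SDF zero-level set.
    alpha = ((prev_cdf - next_cdf + 1e-5) / (prev_cdf + 1e-5)).clamp(0.0, 1.0)
    # Standard alpha compositing, as in NeRF-style volume rendering.
    trans = torch.cumprod(torch.cat(
        [torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-7], dim=-1), dim=-1)[:, :-1]
    return alpha * trans  # (num_rays, num_samples - 1)
```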

NeRF--: Neural Radiance Fields Without Known Camera Parameters

ActiveVisionLab/nerfmm 14 Feb 2021

Considering the problem of novel view synthesis (NVS) from only a set of 2D images, we simplify the training process of Neural Radiance Field (NeRF) on forward-facing scenes by removing the requirement of known or pre-computed camera parameters, including both intrinsics and 6DoF poses.
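
A minimal sketch of that idea, assuming a PyTorch training loop: the focal length and per-image 6DoF poses become trainable parameters that receive gradients from the same photometric loss as the radiance field. The initialization and pose parameterization below are illustrative:

```python
import torch
import torch.nn as nn

class LearnableCameras(nn.Module):
    """Sketch: intrinsics and per-image poses as trainable parameters,
    optimized jointly with the radiance field instead of precomputed by SfM."""
    def __init__(self, num_images, height, width):
        super().__init__()
        # Shared focal length in pixels, initialized to a plausible guess.
        self.focal = nn.Parameter(torch.tensor(float(max(height, width))))
        # Per-image 6DoF pose: axis-angle rotation + translation, init near identity.
        self.rotations = nn.Parameter(torch.zeros(num_images, 3))
        self.translations = nn.Parameter(torch.zeros(num_images, 3))

    def pose(self, idx):
        # Rodrigues' formula: axis-angle vector -> 3x3 rotation matrix.
        r = self.rotations[idx]
        theta = r.norm() + 1e-8
        k = r / theta
        zero = torch.zeros((), device=r.device, dtype=r.dtype)
        K = torch.stack([
            torch.stack([zero, -k[2], k[1]]),
            torch.stack([k[2], zero, -k[0]]),
            torch.stack([-k[1], k[0], zero]),
        ])
        R = (torch.eye(3, device=r.device, dtype=r.dtype)
             + torch.sin(theta) * K
             + (1.0 - torch.cos(theta)) * (K @ K))
        # Gradients reach focal, rotations and translations through the rays
        # built from this pose and the rendering loss.
        return R, self.translations[idx]
```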

PlenOctrees for Real-time Rendering of Neural Radiance Fields

sxyu/plenoctree ICCV 2021

We introduce a method to render Neural Radiance Fields (NeRFs) in real time using PlenOctrees, an octree-based 3D representation which supports view-dependent effects.
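
View dependence in a PlenOctree comes from storing spherical-harmonic (SH) coefficients in the leaves and evaluating them for the query direction at render time. A sketch of that evaluation, using degree-2 real SH and a sigmoid to map to RGB; it is a simplification of the full octree pipeline:

```python
import torch

def sh_basis_deg2(dirs):
    """Real spherical-harmonic basis up to degree 2 for unit view directions.
    dirs: (..., 3) unit vectors; returns (..., 9) basis values."""
    x, y, z = dirs.unbind(-1)
    return torch.stack([
        torch.full_like(x, 0.28209479177387814),        # l = 0
        -0.4886025119029199 * y,                        # l = 1
        0.4886025119029199 * z,
        -0.4886025119029199 * x,
        1.0925484305920792 * x * y,                     # l = 2
        -1.0925484305920792 * y * z,
        0.31539156525252005 * (3.0 * z * z - 1.0),
        -1.0925484305920792 * x * z,
        0.5462742152960396 * (x * x - y * y),
    ], dim=-1)

def view_dependent_rgb(sh_coeffs, view_dirs):
    """sh_coeffs: (..., 3, 9) SH coefficients stored in an octree leaf (R, G, B);
    view_dirs: (..., 3) unit viewing directions. Returns (..., 3) RGB."""
    basis = sh_basis_deg2(view_dirs)                     # (..., 9)
    return torch.sigmoid((sh_coeffs * basis[..., None, :]).sum(-1))
```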

View Synthesis by Appearance Flow

RenYurui/Global-Flow-Local-Attention 11 May 2016

We address the problem of novel view synthesis: given an input image, synthesizing new images of the same object or scene observed from arbitrary viewpoints.
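
The name refers to predicting, for each target pixel, where to sample in the source image. A sketch of the warping step, assuming the flow field has already been predicted by a CNN; PyTorch's grid_sample performs the bilinear resampling:

```python
import torch
import torch.nn.functional as F

def warp_by_appearance_flow(source_image, flow):
    """Resample the source image with a predicted appearance-flow field (sketch).

    source_image: (B, 3, H, W) input view.
    flow:         (B, H, W, 2) predicted sampling coordinates in [-1, 1]
                  (normalized x, y positions in the source image).
    Returns the warped target view; in the full method the flow is predicted
    by a network conditioned on the source image and the target viewpoint.
    """
    return F.grid_sample(source_image, flow, mode="bilinear", align_corners=True)
```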

Deferred Neural Rendering: Image Synthesis using Neural Textures

SSRSGJYD/NeuralTexture 28 Apr 2019

Similar to traditional textures, neural textures are stored as maps on top of 3D mesh proxies; however, the high-dimensional feature maps contain significantly more information, which can be interpreted by our new deferred neural rendering pipeline.
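
A sketch of the two-stage idea with illustrative sizes: a learnable feature texture is sampled with UV coordinates obtained by rasterizing the mesh proxy, and a small decoder (standing in for the paper's deferred rendering network) maps the sampled features to RGB:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeferredNeuralRenderer(nn.Module):
    """Sketch of deferred neural rendering: a learnable neural texture is sampled
    with rasterized UV coordinates, then decoded to RGB by a small network.
    Texture resolution, feature dimension, and decoder are illustrative."""
    def __init__(self, tex_res=512, feat_dim=16):
        super().__init__()
        self.neural_texture = nn.Parameter(torch.randn(1, feat_dim, tex_res, tex_res) * 0.01)
        self.decoder = nn.Sequential(
            nn.Conv2d(feat_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, uv):
        # uv: (B, H, W, 2) per-pixel texture coordinates in [-1, 1], produced by
        # rasterizing the 3D mesh proxy from the target viewpoint.
        B = uv.shape[0]
        tex = self.neural_texture.expand(B, -1, -1, -1)
        feats = F.grid_sample(tex, uv, mode="bilinear", align_corners=True)
        return self.decoder(feats)  # (B, 3, H, W) rendered image
```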

HAC: Hash-grid Assisted Context for 3D Gaussian Splatting Compression

yihangchen-ee/hac 21 Mar 2024

3D Gaussian Splatting (3DGS) has emerged as a promising framework for novel view synthesis, boasting rapid rendering speed with high fidelity.

Fast Feedforward 3D Gaussian Splatting Compression

yihangchen-ee/fcgs 10 Oct 2024

With 3D Gaussian Splatting (3DGS) advancing real-time and high-fidelity rendering for novel view synthesis, storage requirements pose challenges for their widespread adoption.

HAC++: Towards 100X Compression of 3D Gaussian Splatting

yihangchen-ee/hac-plus 21 Jan 2025

3D Gaussian Splatting (3DGS) has emerged as a promising framework for novel view synthesis, boasting rapid rendering speed with high fidelity.