Novel View Synthesis

325 papers with code • 17 benchmarks • 33 datasets

Given source images and their camera poses, synthesize the image seen from an arbitrary target camera pose.

See the Wiki for a more detailed introduction.

Common synthesis methods include neural radiance fields (NeRF), multi-plane images (MPI), and others.
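
As a rough illustration of the NeRF family, the sketch below alpha-composites colors and densities sampled along a batch of camera rays. The `radiance_field` function is a hypothetical stand-in for a trained network, so treat this as a schematic rather than any particular paper's implementation.

```python
# Minimal sketch of NeRF-style volume rendering along a batch of rays.
# `radiance_field` is a hypothetical stand-in for a trained MLP that maps
# 3D points to (RGB color, volume density).
import torch

def render_rays(radiance_field, rays_o, rays_d, near=2.0, far=6.0, n_samples=64):
    # Sample depths uniformly between the near and far planes.
    t = torch.linspace(near, far, n_samples)                           # (S,)
    pts = rays_o[:, None, :] + rays_d[:, None, :] * t[None, :, None]   # (R, S, 3)

    rgb, sigma = radiance_field(pts)                                   # (R, S, 3), (R, S)

    # Alpha-composite: weight each sample by the chance it is the first surface hit.
    dists = torch.diff(t, append=t[-1:] + 1e10)                        # (S,)
    alpha = 1.0 - torch.exp(-sigma * dists)                            # (R, S)
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)
    trans = torch.roll(trans, shifts=1, dims=-1)
    trans[:, 0] = 1.0                                                  # T_i = prod_{j<i}(1 - alpha_j)
    weights = alpha * trans                                            # (R, S)

    return (weights[..., None] * rgb).sum(dim=1)                       # (R, 3) pixel colors
```

MPI-based methods instead predict a stack of fronto-parallel RGBA planes and composite them with the same over-operator.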

(Image credit: Multi-view to Novel View: Synthesizing Novel Views with Self-Learned Confidence)

Most implemented papers

LeftRefill: Filling Right Canvas based on Left Reference through Generalized Text-to-Image Diffusion Model

ewrfcas/leftrefill 19 May 2023

As an exemplar, we leverage LeftRefill to address two different challenges: reference-guided inpainting and novel view synthesis, based on the pre-trained StableDiffusion.
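
The title hints at the core setup: the reference and target views share one canvas, and synthesizing the target becomes an inpainting problem on the right half. A rough sketch of that idea with a generic Stable Diffusion inpainting pipeline follows; the checkpoint, file paths, and prompt are placeholders, and LeftRefill itself fine-tunes task-specific prompts rather than using the pipeline off the shelf.

```python
# Rough sketch of the "left reference, right canvas" idea using a generic
# Stable Diffusion inpainting pipeline; this only illustrates the setup,
# not the authors' fine-tuned model. Paths, prompt, and checkpoint are placeholders.
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting"   # placeholder checkpoint
)

ref = Image.open("reference_view.png").resize((512, 512))  # placeholder input

# Build a 1024x512 canvas: reference on the left, blank target on the right.
canvas = Image.new("RGB", (1024, 512))
canvas.paste(ref, (0, 0))

# Mask the right half so the model fills it conditioned on the left reference.
mask = Image.new("L", (1024, 512), 0)
mask.paste(255, (512, 0, 1024, 512))

result = pipe(prompt="the same scene from a new viewpoint",
              image=canvas, mask_image=mask,
              height=512, width=1024).images[0]
target_view = result.crop((512, 0, 1024, 512))
```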

Transformation-Grounded Image Generation Network for Novel 3D View Synthesis

silverbottlep/tvsn CVPR 2017

Instead of taking a 'blank slate' approach, we first explicitly infer the parts of the geometry visible both in the input and novel views and then re-cast the remaining synthesis problem as image completion.
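
In spirit, the pipeline warps the pixels predicted to be visible in both views and hands the disoccluded remainder to a completion network. The sketch below shows only that warp-then-complete step; `flow`, `visibility`, and `completion_net` are assumed outputs of learned modules and stand in for the paper's components.

```python
# Sketch of the warp-then-complete idea: pixels visible in both views are
# moved with a predicted appearance flow; disoccluded regions are filled by
# an image-completion network. `flow`, `visibility`, and `completion_net`
# are hypothetical stand-ins for the learned modules.
import torch
import torch.nn.functional as F

def warp_then_complete(src_img, flow, visibility, completion_net):
    # src_img:    (B, 3, H, W) source view
    # flow:       (B, H, W, 2) sampling coordinates in [-1, 1] for grid_sample
    # visibility: (B, 1, H, W) soft mask of pixels visible in the novel view
    warped = F.grid_sample(src_img, flow, align_corners=False)

    # Keep warped content where it is visible; hallucinate the rest.
    partial = warped * visibility
    return completion_net(torch.cat([partial, visibility], dim=1))
```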

Monocular Neural Image Based Rendering with Continuous View Control

xuchen-ethz/continuous_view_synthesis ICCV 2019

The approach is self-supervised and only requires 2D images and associated view transforms for training.
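
Because supervision comes only from image pairs and their relative view transform, training can be as simple as reconstructing one view from the other and penalizing the photometric error. A schematic loop under that assumption, with `model`, `loader`, and `optimizer` as placeholders:

```python
# Schematic self-supervised loop: the model maps a source image plus a
# relative camera transform to a predicted target view, and the only
# supervision is the photometric error against the real target image.
import torch.nn.functional as F

def train_epoch(model, loader, optimizer):
    for src_img, tgt_img, rel_pose in loader:        # rel_pose: source -> target transform
        pred_tgt = model(src_img, rel_pose)          # synthesize the target view
        loss = F.l1_loss(pred_tgt, tgt_img)          # photometric reconstruction loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```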

Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis

svip-lab/impersonator ICCV 2019

In this paper, we propose to use a 3D body mesh recovery module to disentangle the pose and shape, which can not only model the joint location and rotation but also characterize the personalized body shape.
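
Disentangling pose from shape here means the mesh is parameterized by two separate vectors, roughly in the style of SMPL: per-joint rotations for pose and a low-dimensional code for body proportions. A schematic of that interface, where the regressor and body model are placeholders for the learned modules:

```python
# Schematic of pose/shape disentanglement via a parametric body model
# (SMPL-style): `mesh_regressor` predicts per-joint rotations (pose) and a
# low-dimensional shape code from an image, and `body_model` turns them into
# mesh vertices. Both are hypothetical stand-ins for the learned components.
def recover_body(image, mesh_regressor, body_model):
    pose_thetas, shape_betas = mesh_regressor(image)    # e.g. per-joint rotations, ~10 shape params
    vertices = body_model(pose=pose_thetas, shape=shape_betas)
    return vertices, pose_thetas, shape_betas
```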

A Neural Rendering Framework for Free-Viewpoint Relighting

LansburyCH/relightable-nr CVPR 2020

We present a novel Relightable Neural Renderer (RNR) for simultaneous view synthesis and relighting using multi-view image inputs.

Liquid Warping GAN with Attention: A Unified Framework for Human Image Synthesis

iPERDance/iPERCore 18 Nov 2020

Also, we build a new dataset, namely iPER dataset, for the evaluation of human motion imitation, appearance transfer, and novel view synthesis.

pixelNeRF: Neural Radiance Fields from One or Few Images

sxyu/pixel-nerf CVPR 2021

This allows the network to be trained across multiple scenes to learn a scene prior, enabling it to perform novel view synthesis in a feed-forward manner from a sparse set of views (as few as one).
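
The scene prior comes from conditioning the radiance field on CNN features sampled where each query point projects into a source view. A rough sketch of that conditioning step, with `encoder`, `project`, and `nerf_mlp` as stand-ins for the learned or camera-specific components:

```python
# Sketch of image-conditioned radiance-field queries in the spirit of
# pixelNeRF: project each 3D query point into the source view, bilinearly
# sample a CNN feature there, and feed it to the NeRF MLP alongside the
# point. `encoder`, `project`, and `nerf_mlp` are hypothetical stand-ins.
import torch
import torch.nn.functional as F

def conditioned_query(pts, src_img, src_pose, encoder, project, nerf_mlp):
    # pts: (N, 3) query points in world space; src_img: (1, 3, H, W)
    feat_map = encoder(src_img)                        # (1, C, H', W') feature map

    uv = project(pts, src_pose)                        # (N, 2) pixel coords in [-1, 1]
    feats = F.grid_sample(feat_map, uv[None, None],    # -> (1, C, 1, N)
                          align_corners=False)
    feats = feats[0, :, 0].t()                         # (N, C) per-point features

    return nerf_mlp(torch.cat([pts, feats], dim=-1))   # per-point (rgb, sigma)
```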

Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video

facebookresearch/nonrigid_nerf ICCV 2021

We show that a single handheld consumer-grade camera is sufficient to synthesize sophisticated renderings of a dynamic scene from novel virtual camera views, e.g., a 'bullet-time' video effect.

Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis

ajayjain/DietNeRF ICCV 2021

We present DietNeRF, a 3D neural scene representation estimated from a few images.
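
The few-shot behavior relies on an auxiliary semantic consistency loss: renders from unobserved poses should have image embeddings (CLIP embeddings in DietNeRF) close to those of the observed photos. A schematic of that loss, with the embedding model left as a placeholder:

```python
# Schematic semantic-consistency loss: an image encoder (CLIP in DietNeRF)
# should produce similar embeddings for a render from a random unobserved
# pose and for a real training photo. `image_encoder` is a placeholder.
import torch.nn.functional as F

def semantic_consistency_loss(rendered_view, reference_photo, image_encoder):
    z_render = F.normalize(image_encoder(rendered_view), dim=-1)
    z_ref = F.normalize(image_encoder(reference_photo), dim=-1)
    return 1.0 - (z_render * z_ref).sum(dim=-1).mean()   # 1 - cosine similarity
```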

Neural RGB-D Surface Reconstruction

dazinovic/neural-rgbd-surface-reconstruction CVPR 2022

Obtaining high-quality 3D reconstructions of room-scale scenes is of paramount importance for upcoming applications in AR or VR.