Generalizable Novel View Synthesis

10 papers with code • 2 benchmarks • 3 datasets


Most implemented papers

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

bmild/nerf ECCV 2020

Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x, y, z)$ and viewing direction $(\theta, \phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location.
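The input/output contract described above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: the real model is a much deeper MLP trained end-to-end through volume rendering, and all names, sizes, and frequency counts here are illustrative.

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    # NeRF lifts each raw coordinate through sin/cos at increasing frequencies
    feats = [x]
    for i in range(num_freqs):
        feats.append(np.sin(2.0**i * np.pi * x))
        feats.append(np.cos(2.0**i * np.pi * x))
    return np.concatenate(feats, axis=-1)

class TinyNeRFMLP:
    """Toy fully-connected network: encoded 5D input -> (density, RGB)."""
    def __init__(self, in_dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, 4))  # 1 density + 3 color channels

    def __call__(self, x):
        h = np.maximum(0.0, x @ self.w1)              # ReLU hidden layer
        out = h @ self.w2
        sigma = np.log1p(np.exp(out[..., :1]))        # softplus -> non-negative density
        rgb = 1.0 / (1.0 + np.exp(-out[..., 1:]))     # sigmoid -> colors in [0, 1]
        return sigma, rgb

# Query a batch of 5D coordinates: spatial (x, y, z) plus viewing direction (theta, phi)
coords = np.array([[0.1, 0.2, 0.3, 0.0, 1.5]])
enc = positional_encoding(coords)                     # (1, 5 * (1 + 2*4)) = (1, 45)
sigma, rgb = TinyNeRFMLP(in_dim=enc.shape[-1])(enc)
print(sigma.shape, rgb.shape)  # (1, 1) (1, 3)
```

In the full method, many such (density, radiance) queries along each camera ray are alpha-composited by the volume rendering integral to produce a pixel color.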

pixelNeRF: Neural Radiance Fields from One or Few Images

sxyu/pixel-nerf CVPR 2021

This allows the network to be trained across multiple scenes to learn a scene prior, enabling it to perform novel view synthesis in a feed-forward manner from a sparse set of views (as few as one).
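The key ingredient behind that feed-forward generalization is conditioning each 3D query point on image features sampled at its projection into the source views. A rough sketch of that lookup, with a hypothetical toy camera (pixelNeRF itself uses a trained CNN encoder and bilinear sampling):

```python
import numpy as np

def project_point(p_world, K, R, t):
    # Pinhole projection of a 3D world point into a source view's pixel grid
    p_cam = R @ p_world + t
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

def sample_feature(feat_map, uv):
    # Nearest-neighbour lookup in the feature map (bilinear in practice)
    h, w, _ = feat_map.shape
    u = int(np.clip(round(uv[0]), 0, w - 1))
    v = int(np.clip(round(uv[1]), 0, h - 1))
    return feat_map[v, u]

# Toy camera looking down +z from the origin; intrinsics are made up
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 32.0],
              [0.0,   0.0,  1.0]])
R, t = np.eye(3), np.zeros(3)
feat_map = np.random.default_rng(0).normal(size=(64, 64, 16))  # stand-in CNN features

uv = project_point(np.array([0.0, 0.0, 2.0]), K, R, t)  # point on the optical axis
f = sample_feature(feat_map, uv)  # image-aligned feature that conditions the NeRF MLP
print(uv, f.shape)  # [32. 32.] (16,)
```

Because the conditioning feature comes from the input image rather than per-scene weights, the same network can be reused across scenes, which is what enables one- or few-shot view synthesis.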

Stereo Magnification with Multi-Layer Images

SamsungLabs/MLI CVPR 2022

The second stage infers the color and the transparency values for these layers producing the final representation for novel view synthesis.
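Once per-layer colors and transparencies are inferred, a novel view is rendered by warping the layers and compositing them back to front. A minimal sketch of that standard "over" compositing step (the warping and the two-stage inference network are omitted; shapes and values are illustrative):

```python
import numpy as np

def composite_layers(colors, alphas):
    """Back-to-front 'over' compositing of fronto-parallel layers.

    colors: (L, H, W, 3) per-layer RGB; alphas: (L, H, W, 1) per-layer
    transparency. Layer 0 is the farthest from the camera.
    """
    out = np.zeros_like(colors[0])
    for rgb, a in zip(colors, alphas):
        out = rgb * a + out * (1.0 - a)
    return out

# Two constant layers: opaque white background, 50%-transparent black foreground
H, W = 4, 4
colors = np.stack([np.ones((H, W, 3)), np.zeros((H, W, 3))])
alphas = np.stack([np.ones((H, W, 1)), np.full((H, W, 1), 0.5)])
img = composite_layers(colors, alphas)
print(img[0, 0])  # [0.5 0.5 0.5]
```

This compositing is differentiable, which is why layered representations like these can be trained directly from posed image pairs.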

Neural Human Performer: Learning Generalizable Radiance Fields for Human Performance Rendering

YoungJoongUNC/Neural_Human_Performer NeurIPS 2021

To tackle this, we propose Neural Human Performer, a novel approach that learns generalizable neural radiance fields based on a parametric human body model for robust performance capture.

KeypointNeRF: Generalizing Image-based Volumetric Avatars using Relative Spatial Encoding of Keypoints

facebookresearch/KeypointNeRF 10 May 2022

In this work, we investigate common issues with existing spatial encodings and propose a simple yet highly effective approach to modeling high-fidelity volumetric humans from sparse views.

Self-improving Multiplane-to-layer Images for Novel View Synthesis

SamsungLabs/MLI 4 Oct 2022

We present a new method for lightweight novel-view synthesis that generalizes to an arbitrary forward-facing scene.

Semantic Ray: Learning a Generalizable Semantic Field with Cross-Reprojection Attention

liuff19/Semantic-Ray CVPR 2023

In this paper, we aim to learn a semantic radiance field from multiple scenes that is accurate, efficient and generalizable.

Sat2Density: Faithful Density Learning from Satellite-Ground Image Pairs

qianmingduowan/Sat2Density ICCV 2023

This paper aims to develop an accurate 3D geometry representation of satellite images using satellite-ground image pairs.

NeO 360: Neural Fields for Sparse View Synthesis of Outdoor Scenes

zubair-irshad/NeO-360 ICCV 2023

NeO 360's representation allows us to learn from a large collection of unbounded 3D scenes while offering generalizability to new views and novel scenes from as few as a single image during inference.

Pose-Free Generalizable Rendering Transformer

zhiwenfan/DragView 5 Oct 2023

To address this challenge, we introduce PF-GRT, a new Pose-Free framework for Generalizable Rendering Transformers that eliminates the need for pre-computed camera poses, instead leveraging feature matching learned directly from data.