Generalizable Novel View Synthesis
14 papers with code • 7 benchmarks • 7 datasets
Benchmarks
These leaderboards are used to track progress in Generalizable Novel View Synthesis.
Most implemented papers
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x, y, z)$ and viewing direction $(\theta, \phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location.
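To make the mapping concrete, here is a minimal, hypothetical sketch of such a network in PyTorch. It is not the paper's architecture (the published MLP is deeper, applies positional encoding to both inputs, and uses a skip connection); it only illustrates the 5D-input, density-plus-radiance-output design:

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Simplified sketch of the NeRF MLP: a 5D coordinate
    (position x, y, z plus viewing direction, here a 3D unit
    vector) maps to volume density and view-dependent RGB.
    Illustrative only; the real model is deeper and uses
    positional encodings and a skip connection."""

    def __init__(self, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)   # density: view-independent
        self.rgb_head = nn.Sequential(           # radiance: view-dependent
            nn.Linear(hidden + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, view_dir):
        h = self.trunk(xyz)
        sigma = torch.relu(self.sigma_head(h))    # keep density non-negative
        rgb = self.rgb_head(torch.cat([h, view_dir], dim=-1))
        return sigma, rgb
```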
pixelNeRF: Neural Radiance Fields from One or Few Images
Conditioning the radiance field on pixel-aligned image features allows the network to be trained across multiple scenes to learn a scene prior, enabling it to perform novel view synthesis in a feed-forward manner from a sparse set of views (as few as one).
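A rough sketch of the key ingredient, pixel-aligned feature sampling, is shown below, assuming a precomputed CNN feature map and known camera intrinsics. The function name and normalization details are illustrative, not the authors' code:

```python
import torch
import torch.nn.functional as F

def pixel_aligned_feature(feat_map, xyz_cam, K):
    """Project each 3D query point into the input view and
    bilinearly sample a per-point image feature, which then
    conditions the radiance-field MLP.
    feat_map: (1, C, H, W) CNN features of the input image
    xyz_cam:  (N, 3) query points in the camera frame
    K:        (3, 3) camera intrinsics"""
    uv = (K @ xyz_cam.T).T                 # project to homogeneous pixels (N, 3)
    uv = uv[:, :2] / uv[:, 2:3]            # perspective divide -> pixel coords
    H, W = feat_map.shape[-2:]
    # rescale pixel coordinates to [-1, 1] as grid_sample expects
    grid = torch.stack([uv[:, 0] / (W - 1), uv[:, 1] / (H - 1)], dim=-1) * 2 - 1
    grid = grid.view(1, -1, 1, 2)
    feat = F.grid_sample(feat_map, grid, align_corners=True)  # (1, C, N, 1)
    return feat[0, :, :, 0].T              # (N, C) per-point features
```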
Stereo Magnification with Multi-Layer Images
The second stage infers the color and transparency values for these layers, producing the final scene representation for novel view synthesis.
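The predicted layers are combined with standard back-to-front "over" compositing. The sketch below shows only that compositing step and omits the reprojection of the layers into the target view that the method performs first:

```python
import torch

def composite_layers(rgb, alpha):
    """Back-to-front 'over' compositing of layered images:
    each layer's color is blended on top of what lies behind it
    according to its transparency.
    rgb:   (L, 3, H, W) per-layer color
    alpha: (L, 1, H, W) per-layer transparency,
    with layers ordered back to front."""
    out = torch.zeros_like(rgb[0])
    for l in range(rgb.shape[0]):
        out = rgb[l] * alpha[l] + out * (1.0 - alpha[l])
    return out
```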
NeO 360: Neural Fields for Sparse View Synthesis of Outdoor Scenes
NeO 360's representation allows us to learn from a large collection of unbounded 3D scenes while offering generalizability to new views and novel scenes from as few as a single image during inference.
Neural Human Performer: Learning Generalizable Radiance Fields for Human Performance Rendering
To tackle this, we propose Neural Human Performer, a novel approach that learns generalizable neural radiance fields based on a parametric human body model for robust performance capture.
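As a loose illustration of what "based on a parametric human body model" can mean, the sketch below gathers features anchored at posed SMPL vertices for each query point. This is only a hypothetical body-anchored lookup; the paper's actual aggregation fuses pixel-aligned features with temporal and multi-view transformers:

```python
import torch

def query_body_anchored_features(xyz, smpl_verts, vert_feats, k=4):
    """Gather features attached to the k nearest posed body-model
    vertices of each query point and blend them by distance.
    xyz:        (N, 3) query points
    smpl_verts: (V, 3) posed SMPL vertex positions
    vert_feats: (V, C) features anchored at those vertices"""
    d = torch.cdist(xyz, smpl_verts)               # (N, V) pairwise distances
    dist, idx = d.topk(k, dim=1, largest=False)    # k nearest vertices
    w = torch.softmax(-dist, dim=1)                # closer -> larger weight
    neigh = vert_feats[idx]                        # (N, k, C)
    return (w.unsqueeze(-1) * neigh).sum(dim=1)    # (N, C) blended feature
```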
KeypointNeRF: Generalizing Image-based Volumetric Avatars using Relative Spatial Encoding of Keypoints
In this work, we investigate common issues with existing spatial encodings and propose a simple yet highly effective approach to modeling high-fidelity volumetric humans from sparse views.
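The sketch below illustrates the flavor of such a relative spatial encoding: each query point is described by its depth offsets to a set of 3D keypoints, passed through a sinusoidal encoding. The function name and frequency count are illustrative; the paper's formulation encodes relative depths per input view:

```python
import torch

def relative_keypoint_encoding(xyz_cam, keypoints_cam, num_freqs=8):
    """Encode a query point by its depth offsets to 3D keypoints
    in the camera frame, mapped through sin/cos frequencies.
    xyz_cam:       (N, 3) query points
    keypoints_cam: (K, 3) keypoint positions"""
    rel_depth = xyz_cam[:, None, 2] - keypoints_cam[None, :, 2]  # (N, K)
    freqs = 2.0 ** torch.arange(num_freqs) * torch.pi            # (F,)
    angles = rel_depth[..., None] * freqs                        # (N, K, F)
    # concatenate sin/cos features and flatten per point
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(1)
```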
Is Attention All That NeRF Needs?
While prior works on NeRFs optimize a scene representation by inverting a handcrafted rendering equation, GNT (Generalizable NeRF Transformer) achieves neural representation and rendering that generalize across scenes using transformers at two stages.
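A minimal sketch of the second of those stages is shown below: attention over the feature tokens sampled along a ray replaces alpha compositing. Dimensions and the single-layer depth are illustrative; the paper stacks several blocks and pairs this with a view transformer that first fuses epipolar features from the source images:

```python
import torch
import torch.nn as nn

class RayAggregator(nn.Module):
    """Attend over per-sample feature tokens along a ray and decode
    a pixel color directly, instead of compositing densities with a
    handcrafted rendering equation. Illustrative dimensions."""

    def __init__(self, dim=64):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.to_rgb = nn.Linear(dim, 3)

    def forward(self, ray_tokens):
        # ray_tokens: (num_rays, samples_per_ray, dim)
        h, _ = self.attn(ray_tokens, ray_tokens, ray_tokens)
        return self.to_rgb(h.mean(dim=1))   # (num_rays, 3) predicted colors
```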
Self-improving Multiplane-to-layer Images for Novel View Synthesis
We present a new method for lightweight novel-view synthesis that generalizes to an arbitrary forward-facing scene.
Semantic Ray: Learning a Generalizable Semantic Field with Cross-Reprojection Attention
In this paper, we aim to learn a semantic radiance field from multiple scenes that is accurate, efficient and generalizable.
Sat2Density: Faithful Density Learning from Satellite-Ground Image Pairs
This paper aims to develop an accurate 3D geometry representation of satellite images using satellite-ground image pairs.