Neural Rendering
141 papers with code • 0 benchmarks • 7 datasets
Given a representation of a 3D scene of some kind (point cloud, mesh, voxels, etc.), the task is to create an algorithm that can produce photorealistic renderings of this scene from an arbitrary viewpoint. Sometimes, the task is accompanied by image/scene appearance manipulation.
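Many of the neural rendering methods listed below (NeRF and its variants in particular) produce a pixel color by alpha-compositing samples along a camera ray. As a rough illustration only — the function name and inputs here are hypothetical, not from any specific paper on this page — the classic volume rendering step can be sketched as:

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one camera ray (classic volume
    rendering, as used in NeRF-style neural rendering).

    densities: (N,) non-negative volume densities at each sample
    colors:    (N, 3) RGB predicted at each sample
    deltas:    (N,) distances between consecutive samples
    """
    # Opacity of each sample from its density and spacing
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches each sample unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Per-sample contribution weights and the final expected color
    weights = alphas * trans
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights

# Two samples: a nearly opaque red sample in front of a blue one,
# so the composited color is dominated by red.
rgb, weights = composite_ray(
    np.array([10.0, 1.0]),
    np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]),
    np.array([1.0, 1.0]),
)
```

In practice the densities and colors come from a learned network or from primitives such as 3D Gaussians, and the compositing is made differentiable so the scene representation can be optimized from images.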
Benchmarks
These leaderboards are used to track progress in Neural Rendering
Libraries
Use these libraries to find Neural Rendering models and implementations
Latest papers with no code
Inverse Neural Rendering for Explainable Multi-Object Tracking
We propose to recast 3D multi-object tracking from RGB cameras as an Inverse Rendering (IR) problem: we optimize over the latent space of pre-trained 3D object representations via a differentiable rendering pipeline to retrieve the latents that best represent the object instances in a given input image.
RainyScape: Unsupervised Rainy Scene Reconstruction using Decoupled Neural Rendering
We propose RainyScape, an unsupervised framework for reconstructing clean scenes from a collection of multi-view rainy images.
Portrait3D: Text-Guided High-Quality 3D Portrait Generation Using Pyramid Representation and GANs Prior
Existing neural rendering-based text-to-3D-portrait generation methods typically make use of human geometry priors and diffusion models to obtain guidance.
3D Gaussian Splatting as Markov Chain Monte Carlo
While 3D Gaussian Splatting has recently become popular for neural rendering, current methods rely on carefully engineered cloning and splitting strategies for placing Gaussians, which do not always generalize and may lead to poor-quality renderings.
HFNeRF: Learning Human Biomechanic Features with Neural Radiance Fields
Among recent advances in novel view synthesis, generalizable Neural Radiance Field (NeRF) based methods applied to human subjects have shown remarkable results in generating novel views from few images.
Flying with Photons: Rendering Novel Views of Propagating Light
Combined with this dataset, we introduce an efficient neural volume rendering framework based on the transient field.
3D Facial Expressions through Analysis-by-Neural-Synthesis
Instead, SMIRK replaces the differentiable rendering with a neural rendering module that generates a face image from the rendered predicted mesh geometry and sparsely sampled pixels of the input image.
Efficient 3D Implicit Head Avatar with Mesh-anchored Hash Table Blendshapes
To address these challenges, we propose a novel fast 3D neural implicit head avatar model that achieves real-time rendering while maintaining fine-grained controllability and high rendering quality.
SGCNeRF: Few-Shot Neural Rendering via Sparse Geometric Consistency Guidance
Neural Radiance Field (NeRF) technology has made significant strides in synthesizing novel viewpoints.
SGD: Street View Synthesis with Gaussian Splatting and Diffusion Prior
To tackle this problem, we propose a novel approach that enhances the capacity of 3DGS by leveraging prior from a Diffusion Model along with complementary multi-modal data.