Neural Rendering
144 papers with code • 0 benchmarks • 7 datasets
Given some representation of a 3D scene (point cloud, mesh, voxels, etc.), the task is to design an algorithm that produces photorealistic renderings of the scene from arbitrary viewpoints. The task is sometimes accompanied by image or scene appearance manipulation.
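Many methods on this page (NeRF variants, transient fields, 3D Gaussian Splatting) share the same core rendering step: compositing color samples along a camera ray weighted by opacity and accumulated transmittance. A minimal sketch of that quadrature rule, assuming NumPy and hypothetical per-sample densities, colors, and segment lengths as inputs:

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Alpha-composite samples along one ray (NeRF-style quadrature).

    sigmas: (N,) volume density at each sample
    colors: (N, 3) RGB color at each sample
    deltas: (N,) distance between adjacent samples
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: fraction of light surviving all earlier segments
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Per-sample contribution weights, then composited RGB
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Example: a dense red sample in front of a green one -> the ray is mostly red
rgb = volume_render(
    sigmas=np.array([10.0, 10.0]),
    colors=np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
    deltas=np.array([1.0, 1.0]),
)
```

Because the weights are differentiable in the densities and colors, the same formula lets gradient descent fit the scene representation from posed images, which is what most of the papers below exploit.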
Benchmarks
These leaderboards are used to track progress in Neural Rendering
Libraries
Use these libraries to find Neural Rendering models and implementations
Latest papers with no code
HFNeRF: Learning Human Biomechanic Features with Neural Radiance Fields
Among recent advances in novel view synthesis, generalizable Neural Radiance Field (NeRF)-based methods applied to human subjects have shown remarkable results in generating novel views from few images.
Flying with Photons: Rendering Novel Views of Propagating Light
Combined with this dataset, we introduce an efficient neural volume rendering framework based on the transient field.
3D Facial Expressions through Analysis-by-Neural-Synthesis
Instead, SMIRK replaces the differentiable rendering with a neural rendering module that, given the rendered predicted mesh geometry and sparsely sampled pixels of the input image, generates a face image.
Efficient 3D Implicit Head Avatar with Mesh-anchored Hash Table Blendshapes
To address these challenges, we propose a novel fast 3D neural implicit head avatar model that achieves real-time rendering while maintaining fine-grained controllability and high rendering quality.
SGCNeRF: Few-Shot Neural Rendering via Sparse Geometric Consistency Guidance
Neural Radiance Field (NeRF) technology has made significant strides in creating novel viewpoints.
SGD: Street View Synthesis with Gaussian Splatting and Diffusion Prior
To tackle this problem, we propose a novel approach that enhances the capacity of 3DGS by leveraging prior from a Diffusion Model along with complementary multi-modal data.
HO-Gaussian: Hybrid Optimization of 3D Gaussian Splatting for Urban Scenes
The rapid growth of 3D Gaussian Splatting (3DGS) has revolutionized neural rendering, enabling real-time production of high-quality renderings.
XScale-NVS: Cross-Scale Novel View Synthesis with Hash Featurized Manifold
We also introduce a novel dataset, namely GigaNVS, to benchmark cross-scale, high-resolution novel view synthesis of real-world large-scale scenes.
Within the Dynamic Context: Inertia-aware 3D Human Modeling with Pose Sequence
Neural rendering techniques have significantly advanced 3D human body modeling.
Towards 3D Vision with Low-Cost Single-Photon Cameras
We present a method for reconstructing 3D shape of arbitrary Lambertian objects based on measurements by miniature, energy-efficient, low-cost single-photon cameras.