NeRF represents a scene with a learned, continuous volumetric radiance field $F_\theta$ defined over a bounded 3D volume. In a NeRF, $F_\theta$ is a multilayer perceptron (MLP) that takes as input a 3D position $\mathbf{x} = (x, y, z)$ and a unit-norm viewing direction $\mathbf{d} = (d_x, d_y, d_z)$, and produces as output a density $\sigma$ and a color $\mathbf{c} = (r, g, b)$. The MLP weights $\theta$ are optimized so that $F_\theta$ encodes the radiance field of the scene. Volume rendering is then used to compute the color of each pixel by integrating density and color along the corresponding camera ray.
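The sketch below illustrates this pipeline under stated assumptions: a small PyTorch MLP with sinusoidal positional encoding, and a numerical quadrature of the volume rendering integral along each ray. Names such as `positional_encoding`, `NeRFMLP`, and `render_rays`, as well as the layer widths and sampling bounds, are illustrative and not taken from the paper's released code.

```python
# Minimal NeRF-style sketch (PyTorch assumed); hyperparameters are illustrative.
import torch
import torch.nn as nn

def positional_encoding(p, num_freqs):
    """Map each coordinate to sin/cos features at increasing frequencies."""
    freqs = 2.0 ** torch.arange(num_freqs, device=p.device) * torch.pi
    angles = p[..., None] * freqs                       # (..., dim, num_freqs)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(-2)                              # (..., dim * 2 * num_freqs)

class NeRFMLP(nn.Module):
    """F_theta: (position x, view direction d) -> (density sigma, color c)."""
    def __init__(self, pos_freqs=10, dir_freqs=4, width=256):
        super().__init__()
        self.pos_freqs, self.dir_freqs = pos_freqs, dir_freqs
        pos_dim, dir_dim = 3 * 2 * pos_freqs, 3 * 2 * dir_freqs
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(width, 1)           # density depends on x only
        self.color_head = nn.Sequential(                # color depends on x and d
            nn.Linear(width + dir_dim, width // 2), nn.ReLU(),
            nn.Linear(width // 2, 3), nn.Sigmoid(),
        )

    def forward(self, x, d):
        h = self.trunk(positional_encoding(x, self.pos_freqs))
        sigma = torch.relu(self.sigma_head(h)).squeeze(-1)
        c = self.color_head(torch.cat([h, positional_encoding(d, self.dir_freqs)], dim=-1))
        return sigma, c

def render_rays(model, rays_o, rays_d, near=2.0, far=6.0, n_samples=64):
    """Numerical quadrature of the volume rendering integral along each ray."""
    t = torch.linspace(near, far, n_samples, device=rays_o.device)        # sample depths
    pts = rays_o[:, None, :] + rays_d[:, None, :] * t[None, :, None]      # (rays, samples, 3)
    dirs = rays_d[:, None, :].expand_as(pts)
    sigma, c = model(pts, dirs)
    delta = torch.cat([t[1:] - t[:-1], torch.full((1,), 1e10, device=t.device)])
    alpha = 1.0 - torch.exp(-sigma * delta)                               # opacity per segment
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1), dim=-1
    )[:, :-1]                                                             # accumulated transmittance
    weights = alpha * trans                                               # contribution per sample
    return (weights[..., None] * c).sum(dim=1)                            # pixel colors (rays, 3)
```

In practice the predicted pixel colors would be compared against the ground-truth image colors with a photometric loss, and the MLP weights optimized by gradient descent; that training loop is omitted here.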
Source: NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
| Task | Papers | Share |
|---|---|---|
| Novel View Synthesis | 149 | 32.60% |
| Neural Rendering | 39 | 8.53% |
| 3D Reconstruction | 36 | 7.88% |
| Depth Estimation | 16 | 3.50% |
| Image Generation | 16 | 3.50% |
| 3D-Aware Image Synthesis | 10 | 2.19% |
| Semantic Segmentation | 9 | 1.97% |
| Super-Resolution | 8 | 1.75% |
| Pose Estimation | 8 | 1.75% |