Inverse Rendering

63 papers with code • 1 benchmark • 3 datasets

Inverse rendering is the task of recovering the properties of a scene, such as shape, material, and lighting, from an image or a video. Once these properties are estimated, they can be used to re-render the scene, for example to generate new images or videos under novel viewpoints or lighting.
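In its simplest form, inverse rendering can be framed as an optimization: fit scene parameters so that a forward rendering model reproduces the observed image. The sketch below is a deliberately minimal, hypothetical illustration (not taken from any paper listed here): a single Lambertian pixel where the rendered value is albedo times a known shading term, and the unknown albedo is recovered by gradient descent on the photometric error.

```python
# Toy inverse rendering: recover an unknown albedo from one observed pixel,
# given a known shading term (the cosine of the normal/light angle).
# Hypothetical setup for illustration only.

def render(albedo, shading):
    """Forward model: Lambertian pixel value."""
    return albedo * shading

def recover_albedo(observed, shading, steps=200, lr=0.5):
    """Gradient descent on the squared photometric error."""
    albedo = 0.0  # initial guess
    for _ in range(steps):
        residual = render(albedo, shading) - observed
        grad = 2.0 * residual * shading  # d/d_albedo of residual**2
        albedo -= lr * grad
    return albedo

true_albedo = 0.8
shading = 0.6  # assumed known here; real methods must estimate it too
observed = render(true_albedo, shading)
print(round(recover_albedo(observed, shading), 4))  # → 0.8
```

Real inverse rendering replaces this one-parameter model with full geometry, SVBRDFs, and lighting, and the hand-written gradient with automatic differentiation through a differentiable renderer, but the optimization loop has the same shape.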

Most implemented papers

ADOP: Approximate Differentiable One-Pixel Point Rendering

darglein/ADOP 13 Oct 2021

Like other neural renderers, our system takes as input calibrated camera images and a proxy geometry of the scene, in our case a point cloud.

Extracting Triangular 3D Models, Materials, and Lighting From Images

NVlabs/nvdiffrec CVPR 2022

We present an efficient method for joint optimization of topology, materials and lighting from multi-view image observations.

Intrinsic Image Decomposition via Ordinal Shading

compphoto/Intrinsic ACM Transactions on Graphics 2023

We encourage the model to learn an accurate decomposition by computing losses on the estimated shading as well as the albedo implied by the intrinsic model.

SfSNet: Learning Shape, Reflectance and Illuminance of Faces in the Wild

senguptaumd/SfSNet CVPR 2018

SfSNet learns from a mixture of labeled synthetic and unlabeled real world images.

RenderNet: A deep convolutional network for differentiable rendering from 3D shapes

thunguyenphuoc/RenderNet NeurIPS 2018

We present RenderNet, a differentiable rendering convolutional network with a novel projection unit that can render 2D images from 3D shapes.

Differentiable Monte Carlo Ray Tracing through Edge Sampling

BachiLi/redner SIGGRAPH 2018

We introduce a general-purpose differentiable ray tracer, which, to our knowledge, is the first comprehensive solution that is able to compute derivatives of scalar functions over a rendered image with respect to arbitrary scene parameters such as camera pose, scene geometry, materials, and lighting parameters.
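The key quantity such a differentiable ray tracer provides is the derivative of a scalar loss over the rendered image with respect to scene parameters. The sketch below is a generic illustration (not redner's API): it approximates that derivative for a single assumed parameter, a global light intensity, via central finite differences, which is the same quantity a differentiable renderer computes analytically and far more efficiently.

```python
# Generic sketch: derivative of a scalar image loss w.r.t. a scene parameter
# (here a global light intensity), approximated by central finite differences.
# A differentiable ray tracer computes this analytically for all parameters.

def render_image(light_intensity, albedos):
    """Trivial forward model: per-pixel albedo scaled by a global light."""
    return [a * light_intensity for a in albedos]

def loss(image, target):
    """Scalar photometric loss over the rendered image."""
    return sum((p - t) ** 2 for p, t in zip(image, target))

def d_loss_d_light(light_intensity, albedos, target, eps=1e-5):
    """Central finite-difference estimate of d(loss)/d(light_intensity)."""
    lo = loss(render_image(light_intensity - eps, albedos), target)
    hi = loss(render_image(light_intensity + eps, albedos), target)
    return (hi - lo) / (2.0 * eps)

albedos = [0.2, 0.5, 0.9]
target = render_image(1.0, albedos)  # "observed" image
print(round(d_loss_d_light(1.5, albedos, target), 4))
```

Finite differences need two renders per parameter and break down at visibility discontinuities; the paper's edge-sampling approach is precisely what makes the analytic gradient correct across such discontinuities.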

InverseRenderNet: Learning single image inverse rendering

YeeU/InverseRenderNet CVPR 2019

By incorporating a differentiable renderer, our network can learn from self-supervision.

Inverse Rendering for Complex Indoor Scenes: Shape, Spatially-Varying Lighting and SVBRDF from a Single Image

lzqsd/InverseRenderingOfIndoorScene CVPR 2020

Our inverse rendering network incorporates physical insights -- including a spatially-varying spherical Gaussian lighting representation, a differentiable rendering layer to model scene appearance, a cascade structure to iteratively refine the predictions and a bilateral solver for refinement -- allowing us to jointly reason about shape, lighting, and reflectance.

Differentiable Surface Splatting for Point-based Geometry Processing

yifita/DSS 10 Jun 2019

We propose Differentiable Surface Splatting (DSS), a high-fidelity differentiable renderer for point clouds.

Deep Single-Image Portrait Relighting

zhhoper/DPR ICCV 2019

In this work, we apply a physically-based portrait relighting method to generate a large-scale, high-quality, "in the wild" portrait relighting dataset (DPR).