NeRD: Neural Reflectance Decomposition from Image Collections

Decomposing a scene into its shape, reflectance, and illumination is a challenging but important problem in computer vision and graphics. This problem is inherently more challenging when the illumination is not a single light source under laboratory conditions but is instead an unconstrained environmental illumination. Though recent work has shown that implicit representations can be used to model the radiance field of an object, most of these techniques only enable view synthesis, not relighting. Additionally, evaluating these radiance fields is resource- and time-intensive. We propose a neural reflectance decomposition (NeRD) technique that uses physically-based rendering to decompose the scene into spatially varying BRDF material properties. In contrast to existing techniques, our input images can be captured under different illumination conditions. We also propose techniques to convert the learned reflectance volume into a relightable textured mesh, enabling fast real-time rendering with novel illuminations. We demonstrate the potential of the proposed approach with experiments on both synthetic and real datasets, where we are able to obtain high-quality relightable 3D assets from image collections. The datasets and code are available on the project page: https://markboss.me/publication/2021-nerd/
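To illustrate why decomposing a scene into spatially varying BRDF parameters enables relighting, the sketch below evaluates a standard Cook-Torrance/GGX BRDF (with per-point base color, metallic, and roughness) under two different light directions. This is a minimal, generic physically-based shading example, not the paper's actual implementation; the function name `shade_point` and the exact BRDF parameterization are assumptions for illustration.

```python
# Minimal sketch (assumed, not NeRD's code): once per-point BRDF parameters
# are recovered, outgoing radiance can be re-evaluated under any new light.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def shade_point(base_color, metallic, roughness, n, v, l, light_radiance):
    """Evaluate a simplified Cook-Torrance BRDF for one light sample."""
    h = normalize(v + l)                      # half vector
    n_l = max(np.dot(n, l), 0.0)
    n_v = max(np.dot(n, v), 1e-4)
    n_h = max(np.dot(n, h), 0.0)
    h_v = max(np.dot(h, v), 0.0)

    alpha = roughness ** 2
    # GGX normal distribution term
    d = alpha ** 2 / (np.pi * (n_h ** 2 * (alpha ** 2 - 1) + 1) ** 2 + 1e-7)
    # Smith-Schlick geometry (shadowing/masking) term
    k = (roughness + 1) ** 2 / 8
    g = (n_v / (n_v * (1 - k) + k)) * (n_l / (n_l * (1 - k) + k))
    # Schlick Fresnel, with F0 blended toward base color for metals
    f0 = 0.04 * (1 - metallic) + base_color * metallic
    f = f0 + (1 - f0) * (1 - h_v) ** 5

    specular = d * g * f / (4 * n_v * n_l + 1e-7)
    diffuse = (1 - metallic) * base_color / np.pi
    return (diffuse + specular) * light_radiance * n_l

# Relighting: the same decomposed material under two different lights.
n = np.array([0.0, 0.0, 1.0])
v = normalize(np.array([0.0, 0.3, 1.0]))
mat = dict(base_color=np.array([0.8, 0.2, 0.2]), metallic=0.1, roughness=0.4)
for l_dir in ([0.0, 0.0, 1.0], [0.7, 0.0, 0.7]):
    l = normalize(np.array(l_dir))
    print(shade_point(**mat, n=n, v=v, l=l, light_radiance=np.ones(3)))
```

A pure radiance field bakes the illumination into the learned colors, so this kind of re-evaluation under a new light is not possible; recovering explicit material parameters is what makes it tractable.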

ICCV 2021

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Image Relighting | Stanford-ORB | NeRD | HDR-PSNR | 23.29 | #5 |
| Image Relighting | Stanford-ORB | NeRD | SSIM | 0.957 | #7 |
| Image Relighting | Stanford-ORB | NeRD | LPIPS | 0.059 | #7 |
| Depth Prediction | Stanford-ORB | NeRD | Si-MSE | 1.39 | #6 |
| Surface Normals Estimation | Stanford-ORB | NeRD | Cosine Distance | 0.28 | #6 |
| Surface Reconstruction | Stanford-ORB | NeRD | Chamfer Distance | 13.70 | #7 |
| Inverse Rendering | Stanford-ORB | NeRD | HDR-PSNR | 23.29 | #5 |
