Extracting Triangular 3D Models, Materials, and Lighting From Images

We present an efficient method for joint optimization of topology, materials, and lighting from multi-view image observations. Unlike recent multi-view reconstruction approaches, which typically produce entangled 3D representations encoded in neural networks, we output triangle meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine unmodified. We leverage recent work in differentiable rendering, coordinate-based networks to compactly represent volumetric texturing, and differentiable marching tetrahedrons to enable gradient-based optimization directly on the surface mesh. Finally, we introduce a differentiable formulation of the split sum approximation of environment lighting to efficiently recover all-frequency lighting. Experiments show our extracted models used in advanced scene editing, material decomposition, and high-quality view interpolation, all running at interactive rates in triangle-based renderers (rasterizers and path tracers). Project website: https://nvlabs.github.io/nvdiffrec/ .
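To make the marching-tetrahedrons step concrete: the operation that makes surface extraction differentiable is placing each surface vertex on a tetrahedron edge by linearly interpolating the signed-distance values at the edge endpoints. The following is a minimal numpy sketch of that single step (an illustration under our own naming, not the paper's implementation; the function name `edge_crossing` is ours):

```python
import numpy as np

def edge_crossing(p_a, p_b, s_a, s_b):
    """Zero-crossing of the SDF along edge (p_a, p_b).

    s_a and s_b are signed-distance values of opposite sign at the
    endpoints; the returned vertex is a linear function of them, so
    gradients can flow back to the SDF during optimization.
    """
    t = s_a / (s_a - s_b)          # fraction along the edge where the sign flips
    return (1.0 - t) * p_a + t * p_b

# Example: SDF values -1 and +1 place the vertex at the edge midpoint.
p_a = np.array([0.0, 0.0, 0.0])
p_b = np.array([1.0, 0.0, 0.0])
print(edge_crossing(p_a, p_b, -1.0, 1.0))  # -> [0.5 0.  0. ]
```

Repeating this per sign-changing edge of each tetrahedron yields the triangle mesh on which the method optimizes directly.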
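The split sum approximation mentioned above factors the shading integral ∫ L_i(l) f(l, v) (n·l) dl into a product of two separately pre-integrable terms. A toy Monte Carlo sketch below illustrates the factorization E[L·w] ≈ E[L]·E[w] with a hand-picked smooth radiance function and a cosine stand-in for the BRDF lobe (both are assumptions for illustration; the paper's formulation uses prefiltered environment maps, not this sampling):

```python
import numpy as np

rng = np.random.default_rng(0)

# Uniform sample directions on the upper hemisphere.
N = 200_000
z = rng.uniform(0.0, 1.0, N)
phi = rng.uniform(0.0, 2.0 * np.pi, N)
r = np.sqrt(1.0 - z * z)
dirs = np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=-1)

# Toy smooth environment radiance and a cosine lobe standing in for f(l,v)(n.l).
L = 1.0 + 0.5 * dirs[:, 2]      # incoming radiance L_i(l)
w = dirs[:, 2]                  # BRDF-times-cosine stand-in

full  = np.mean(L * w)              # reference estimate of E[L * w]
split = np.mean(L) * np.mean(w)     # split sum: E[L] * E[w]

print(full, split)  # close, but not equal: the factorization is an approximation
```

With this smooth lighting the two estimates agree to within a few percent, which is why the approximation works well for low-frequency environment lighting; the error grows as lighting and BRDF become more correlated.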

CVPR 2022

Results from the Paper

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Depth Prediction | Stanford-ORB | NVDiffRec | Si-MSE | 0.31 | #2 |
| Surface Normals Estimation | Stanford-ORB | NVDiffRec | Cosine Distance | 0.06 | #2 |
| Surface Reconstruction | Stanford-ORB | NVDiffRec | Chamfer Distance | 0.62 | #4 |
| Image Relighting | Stanford-ORB | NVDiffRec | HDR-PSNR | 22.91 | #6 |
| Image Relighting | Stanford-ORB | NVDiffRec | SSIM | 0.963 | #5 |
| Image Relighting | Stanford-ORB | NVDiffRec | LPIPS | 0.039 | #3 |
| Inverse Rendering | Stanford-ORB | NVDiffRec | HDR-PSNR | 22.91 | #6 |
