Image Relighting
25 papers with code • 2 benchmarks • 3 datasets
Image relighting is the task of modifying an image's apparent illumination (for example, the light direction, intensity, or color) while preserving the underlying scene content.
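As a minimal illustration of what "changing the illumination" can mean, the sketch below shows two naive relighting operations in NumPy: a global gain/gamma adjustment, and Lambertian re-shading under a new light direction when per-pixel albedo and surface normals are available. The function names and interfaces are illustrative assumptions, not from any of the papers listed here; the methods below learn far richer models of geometry, materials, and lighting.

```python
import numpy as np

def relight_global(image, gain=1.0, gamma=1.0):
    """Naive global relighting: rescale and gamma-adjust pixel intensities.

    image: float array in [0, 1], shape (H, W, 3).
    """
    return np.clip(gain * np.power(image, gamma), 0.0, 1.0)

def relight_lambertian(albedo, normals, light_dir):
    """Re-shade a Lambertian surface under a new directional light.

    albedo: (H, W, 3) reflectance in [0, 1]
    normals: (H, W, 3) unit surface normals
    light_dir: length-3 direction pointing toward the light
    """
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    shading = np.clip(normals @ l, 0.0, None)  # clamped cosine term
    return np.clip(albedo * shading[..., None], 0.0, 1.0)
```

In practice, estimating the albedo and normals required by the second function from a photograph is itself the hard inverse-rendering problem that several of the papers below address.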
Libraries
Use these libraries to find Image Relighting models and implementations.

Latest papers
Intrinsic Harmonization for Illumination-Aware Compositing
Despite significant advances in network-based image harmonization, a domain gap remains between typical training pairs and the real-world composites encountered at inference.
Stanford-ORB: A Real-World 3D Object Inverse Rendering Benchmark
We introduce Stanford-ORB, a new real-world 3D object inverse rendering benchmark.
NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination
Inverse rendering methods aim to estimate geometry, materials and illumination from multi-view RGB images.
Designing An Illumination-Aware Network for Deep Image Relighting
Lighting is a determining factor in photography that affects the style, expression of emotion, and even quality of images.
Local Relighting of Real Scenes
We introduce the task of local relighting, which changes a photograph of a scene by switching on and off the light sources that are visible within the image.
Shape, Light, and Material Decomposition from Images using Monte Carlo Rendering and Denoising
Unfortunately, Monte Carlo integration provides estimates with significant noise, even at large sample counts, which makes gradient-based inverse rendering very challenging.
Extracting Triangular 3D Models, Materials, and Lighting From Images
We present an efficient method for joint optimization of topology, materials and lighting from multi-view image observations.
Bi-Mix: Bidirectional Mixing for Domain Adaptive Nighttime Semantic Segmentation
Specifically, in the image translation stage, Bi-Mix leverages the knowledge of day-night image pairs to improve the quality of nighttime image relighting.
A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis
Motivated by the observation that a 3D object should look realistic from multiple viewpoints, these methods introduce a multi-view constraint as regularization to learn valid 3D radiance fields from 2D images.
NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.