Image Relighting
25 papers with code • 2 benchmarks • 3 datasets
Image relighting involves changing the illumination settings of an image.
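As a minimal illustration of what "changing the illumination settings" can mean, the sketch below applies a global per-channel gain and gamma to an RGB image. This is a toy, not any of the learned methods listed on this page (a real relighting model also moves shadows and highlights, which a global color transform cannot do); the function name and parameters are illustrative assumptions.

```python
import numpy as np

def relight(image: np.ndarray, gains=(1.0, 1.0, 1.0), gamma: float = 1.0) -> np.ndarray:
    """Crude global relighting on a float RGB image with values in [0, 1].

    gamma < 1 brightens midtones, gamma > 1 darkens them; per-channel
    gains shift the color temperature (e.g. boosting red and cutting
    blue mimics a warmer light source).
    """
    out = np.clip(image, 0.0, 1.0) ** gamma              # adjust midtone brightness
    out = out * np.asarray(gains, dtype=image.dtype)     # shift color temperature
    return np.clip(out, 0.0, 1.0)                        # keep the result in [0, 1]
```

For example, `relight(img, gains=(1.2, 1.0, 0.8))` nudges a neutral image toward a warm, tungsten-like illuminant.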
Libraries

Use these libraries to find Image Relighting models and implementations.

Most implemented papers
2D Image Relighting with Image-to-Image Translation
This problem is non-trivial, and it becomes even more complicated when we want to change the direction of the light source from an arbitrary direction to a specific one.
Deep Relighting Networks for Image Light Source Manipulation
Manipulating the light source of given images is an interesting task and useful in various applications, including photography and cinematography.
NeRD: Neural Reflectance Decomposition from Image Collections
This problem is inherently more challenging when the illumination is not a single light source under laboratory conditions but is instead an unconstrained environmental illumination.
Relighting Images in the Wild with a Self-Supervised Siamese Auto-Encoder
We propose a self-supervised method for image relighting of single view images in the wild.
Multi-scale Self-calibrated Network for Image Light Source Transfer
However, several problems in scene reconversion and shadow estimation, including uncalibrated features and weak semantic information, remain unresolved, resulting in insufficient feature representations.
S3Net: A Single Stream Structure for Depth Guided Image Relighting
Depth-guided any-to-any image relighting aims to generate a relit version of an original image, using its depth map, that matches the illumination setting of a given guide image and its depth map.
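The input/output contract of any-to-any relighting can be sketched as a function of four arrays: source image, source depth, guide image, guide depth. The body below is only a placeholder (a mean-color transfer from the guide) to make the interface concrete; the function name and the placeholder logic are assumptions, not the S3Net architecture, which is a trained deep network.

```python
import numpy as np

def any_to_any_relight(src_img: np.ndarray, src_depth: np.ndarray,
                       guide_img: np.ndarray, guide_depth: np.ndarray) -> np.ndarray:
    """Sketch of the depth-guided any-to-any relighting interface.

    A real model would use all four inputs; here the depth maps are only
    shape-checked and the "relighting" is a trivial mean-color transfer
    from the guide image, included purely to illustrate the I/O contract.
    """
    assert src_img.shape[:2] == src_depth.shape[:2], "source depth must align with source image"
    assert guide_img.shape[:2] == guide_depth.shape[:2], "guide depth must align with guide image"
    src_mean = src_img.mean(axis=(0, 1), keepdims=True)      # average color of the source
    guide_mean = guide_img.mean(axis=(0, 1), keepdims=True)  # average color of the guide
    # Placeholder "relighting": shift the source toward the guide's global color.
    return np.clip(src_img - src_mean + guide_mean, 0.0, 1.0)
```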
Physically Inspired Dense Fusion Networks for Relighting
While our proposed method applies to both one-to-one and any-to-any relighting problems, for each case we introduce problem-specific components that enrich the model performance: 1) For one-to-one relighting we incorporate normal vectors of the surfaces in the scene to adjust gloss and shadows accordingly in the image.
NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis
Motivated by the observation that a 3D object should look realistic from multiple viewpoints, these methods introduce a multi-view constraint as regularization to learn valid 3D radiance fields from 2D images.
Bi-Mix: Bidirectional Mixing for Domain Adaptive Nighttime Semantic Segmentation
Specifically, in the image translation stage, Bi-Mix leverages the knowledge of day-night image pairs to improve the quality of nighttime image relighting.