13 papers with code • 1 benchmark • 1 dataset
Image relighting involves changing the illumination settings of an image.
The proposed dehazing method does not rely on the atmosphere scattering model, and we explain why exploiting the dimension reduction offered by that model is not necessarily beneficial for image dehazing, even when only dehazing results on synthetic images are considered.
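For context, the atmosphere scattering model referred to above is the standard parametrization I = J·t + A·(1 − t), where J is the haze-free scene, t the transmission map, and A the global airlight. A minimal numpy sketch of that conventional model (the one the paper deliberately avoids); all names here are illustrative, not from the paper:

```python
import numpy as np

def synthesize_haze(J, t, A):
    """Apply the atmosphere scattering model to a clean image J."""
    return J * t + A * (1.0 - t)

def invert_scattering(I, t, A, eps=1e-6):
    """Conventional model-based dehazing: recover J given t and A."""
    return (I - A * (1.0 - t)) / np.maximum(t, eps)

J = np.random.rand(4, 4, 3)      # clean image in [0, 1]
t = np.full((4, 4, 1), 0.6)      # uniform transmission map
A = 0.9                          # global airlight

I = synthesize_haze(J, t, A)
J_rec = invert_scattering(I, t, A)
assert np.allclose(J, J_rec)     # exact only when t and A are known
```

The inversion is exact only with ground-truth t and A; estimating them from a single hazy image is the hard part, which is one reason a method may prefer to bypass the model entirely.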
Ranked #6 on Image Relighting on VIDIT’20 validation set
The first track considered one-to-one relighting; the objective was to relight an input photo of a scene to a different color temperature and illuminant orientation (i.e., light source position).
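In interface terms, one-to-one relighting maps an input image plus a fixed target illuminant to a relit image. A toy numpy sketch, where the per-channel gains are a crude stand-in for a learned network (the function name and gain values are assumptions, not from the challenge):

```python
import numpy as np

def relight_one_to_one(image, target_temp_k):
    """Toy color-temperature shift: warm targets boost red, cool targets boost blue."""
    # Illustrative linear gains; a real system learns this mapping end to end.
    warm = np.clip((6500.0 - target_temp_k) / 6500.0, -1.0, 1.0)
    gains = np.array([1.0 + 0.3 * warm, 1.0, 1.0 - 0.3 * warm])
    return np.clip(image * gains, 0.0, 1.0)

img = np.full((2, 2, 3), 0.5)
warm_img = relight_one_to_one(img, 3500.0)   # warmer target: red channel up
cool_img = relight_one_to_one(img, 9000.0)   # cooler target: blue channel up
assert warm_img[0, 0, 0] > warm_img[0, 0, 2]
assert cool_img[0, 0, 2] > cool_img[0, 0, 0]
```

Changing the illuminant orientation additionally requires moving shadows and highlights, which is why the task needs a learned model rather than a global color transform like this one.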
Deep image relighting is gaining more interest lately, as it allows photo enhancement through illumination-specific retouching without human effort.
Manipulating the light source of given images is an interesting task and useful in various applications, including photography and cinematography.
Ranked #2 on Image Relighting on VIDIT’20 validation set
However, several problems in scene reconversion and shadow estimation, including uncalibrated feature information and poor semantic information, remain unresolved, resulting in insufficient feature representation.
This problem is not trivial, and it becomes even more complicated when we want to change the light source from an arbitrary direction to a specific one.
While our proposed method applies to both one-to-one and any-to-any relighting, for each case we introduce problem-specific components that improve performance: 1) for one-to-one relighting, we incorporate normal vectors of the surfaces in the scene to adjust gloss and shadows accordingly in the image.
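One common way to feed surface normals to such a network is to concatenate the per-pixel normal map with the RGB image as extra input channels; the normals also admit a simple shading term. A hedged sketch in numpy (shapes and names are assumptions, not the paper's actual architecture):

```python
import numpy as np

def build_input(rgb, normals):
    """Stack an H x W x 3 image with H x W x 3 unit normals -> H x W x 6."""
    assert rgb.shape == normals.shape
    return np.concatenate([rgb, normals], axis=-1)

rgb = np.random.rand(8, 8, 3)
normals = np.zeros((8, 8, 3))
normals[..., 2] = 1.0                 # flat surface facing the camera
x = build_input(rgb, normals)
assert x.shape == (8, 8, 6)

# A Lambertian term from normals and a light direction is one way a model
# could reason about how shadows move with the illuminant:
light = np.array([0.0, 0.0, 1.0])     # light along the viewing axis
shading = np.clip(normals @ light, 0.0, 1.0)   # H x W shading map
assert np.allclose(shading, 1.0)      # frontal surface is fully lit here
```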
Ranked #1 on Image Relighting on VIDIT’20 validation set
This model extracts image and depth features with a bifurcated network in the encoder.
Depth-guided any-to-any image relighting aims to generate a relit image from the original image and its depth map that matches the illumination setting of a given guide image and its depth map.
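The inputs to depth-guided any-to-any relighting can be summarized as two image–depth pairs: the original scene and the guide whose illumination should be transferred. A toy sketch of a bifurcated encoder consuming both pairs, with plain pooled statistics standing in for learned branches (all names and feature sizes are illustrative assumptions):

```python
import numpy as np

def encode_bifurcated(image, depth):
    """Toy two-branch encoder: separate features for image and depth."""
    img_feat = image.mean(axis=(0, 1))                   # 3-dim color statistic
    depth_feat = np.array([depth.mean(), depth.std()])   # 2-dim depth statistic
    return np.concatenate([img_feat, depth_feat])        # 5-dim joint feature

orig_img, orig_depth = np.random.rand(16, 16, 3), np.random.rand(16, 16)
guide_img, guide_depth = np.random.rand(16, 16, 3), np.random.rand(16, 16)

# A relighting decoder would condition on both encodings to render the
# original scene under the guide's illumination:
features = np.concatenate([
    encode_bifurcated(orig_img, orig_depth),
    encode_bifurcated(guide_img, guide_depth),
])
assert features.shape == (10,)
```

Keeping image and depth in separate branches lets each modality be processed at its natural scale before fusion, which is the motivation usually given for bifurcated encoders.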