Intrinsic Image Decomposition
26 papers with code • 0 benchmarks • 8 datasets
Intrinsic Image Decomposition is the process of separating an image into its formation components, such as reflectance (albedo) and shading (illumination). Reflectance is the intrinsic color of the object, invariant to camera viewpoint and illumination conditions, whereas shading, which depends on camera viewpoint and object geometry, consists of illumination effects such as shadows, shading gradients, and inter-reflections. Using intrinsic images instead of the original images can be beneficial for many computer vision algorithms. For instance, for shape-from-shading algorithms, shading images contain important visual cues for recovering geometry, while for segmentation and detection algorithms, reflectance images can be beneficial because they are independent of confounding illumination effects. Furthermore, intrinsic images are used in a wide range of computational photography applications, such as material recoloring, relighting, retexturing, and stylization.
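The decomposition described above is commonly formalized with the Lambertian multiplicative model, in which each pixel of the image is the product of reflectance and shading. A minimal NumPy sketch of that assumed model (the arrays and values here are illustrative, not taken from any of the papers below):

```python
import numpy as np

# Common Lambertian intrinsic image model (an assumption, not a
# method from any specific paper): the observed image I is the
# per-pixel product of a reflectance (albedo) image A and a
# shading image S, i.e. I = A * S.

rng = np.random.default_rng(0)

# Synthetic reflectance (H x W x 3) and grayscale shading (H x W x 1),
# both strictly positive so the division below is well defined.
albedo = rng.uniform(0.2, 1.0, size=(4, 4, 3))
shading = rng.uniform(0.2, 1.0, size=(4, 4, 1))

image = albedo * shading  # forward image-formation model

# Given the image and one known component, the other follows by
# per-pixel division. When BOTH components are unknown, the problem
# is ill-posed, which is what the learned decomposition methods
# listed on this page are designed to address.
recovered_shading = image / np.clip(albedo, 1e-6, None)
```

The clipping guards against division by near-zero albedo; in this synthetic example, `recovered_shading` matches `shading` exactly because the forward model is noise-free.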
Source: CNN based Learning using Reflection and Retinex Models for Intrinsic Image Decomposition
Most implemented papers
Unsupervised Learning for Intrinsic Image Decomposition from a Single Image
Intrinsic image decomposition, which is an essential task in computer vision, aims to infer the reflectance and shading of the scene.
Illumination-Aware Image Quality Assessment for Enhanced Low-light Image
To reduce the overshoot effects of low-light image enhancement (LIE), this paper proposes an illumination-aware image quality assessment, called LIE-IQA, for enhanced low-light images.
Intrinsic Image Decomposition via Ordinal Shading
We encourage the model to learn an accurate decomposition by computing losses on the estimated shading as well as the albedo implied by the intrinsic model.
Exploiting Diffusion Prior for Generalizable Dense Prediction
Contents generated by recent advanced Text-to-Image (T2I) diffusion models are sometimes too imaginative for existing off-the-shelf dense predictors to estimate due to the immitigable domain gap.
Unified Depth Prediction and Intrinsic Image Decomposition from a Single Image via Joint Convolutional Neural Fields
We present a method for jointly predicting a depth map and intrinsic images from single-image input.
Learning Intrinsic Image Decomposition from Watching the World
However, it is difficult to collect ground truth training data at scale for intrinsic images.
Joint Learning of Intrinsic Images and Semantic Segmentation
To that end, we propose a supervised end-to-end CNN architecture to jointly learn intrinsic image decomposition and semantic segmentation.
Learning Blind Video Temporal Consistency
Our method takes the original unprocessed and per-frame processed videos as inputs to produce a temporally consistent video.
ShadingNet: Image Intrinsics by Fine-Grained Shading Decomposition
The aim is to distinguish strong photometric effects from reflectance variations.
Intrinsic Decomposition of Document Images In-the-Wild
However, document shadow and shading removal results still suffer because: (a) prior methods rely on the uniformity of local color statistics, which limits their applicability to real-world scenarios with complex document shapes and textures; and (b) synthetic or hybrid datasets with non-realistic, simulated lighting conditions are used to train the models.