Intrinsic Image Decomposition
20 papers with code • 0 benchmarks • 6 datasets
Intrinsic Image Decomposition is the process of separating an image into its formation components, such as reflectance (albedo) and shading (illumination). Reflectance is the intrinsic color of the object, invariant to camera viewpoint and illumination conditions, whereas shading depends on camera viewpoint and object geometry and captures illumination effects such as shadows, shaded regions, and inter-reflections. Using intrinsic images instead of the original images can benefit many computer vision algorithms. For shape-from-shading algorithms, the shading image contains important visual cues for recovering geometry, while for segmentation and detection algorithms, the reflectance image is useful because it is independent of confounding illumination effects. Intrinsic images are also used in a wide range of computational photography applications, such as material recoloring, relighting, retexturing, and stylization.
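Under the common Lambertian assumption, this decomposition is multiplicative: the observed image is the per-pixel product of albedo and shading. A minimal NumPy sketch of that model (toy random arrays, not taken from any of the papers below):

```python
import numpy as np

# Lambertian formation model: each observed pixel is the elementwise
# product of reflectance (albedo) and shading, I = A * S.
rng = np.random.default_rng(0)

albedo = rng.uniform(0.2, 1.0, size=(4, 4, 3))   # per-pixel surface color (RGB)
shading = rng.uniform(0.1, 1.0, size=(4, 4, 1))  # grayscale illumination

image = albedo * shading  # broadcast the single shading channel over RGB

# Given a perfect shading estimate, albedo is recovered by division;
# the hard part of IID is estimating A and S from the image alone.
recovered_albedo = image / shading
assert np.allclose(recovered_albedo, albedo)
```

Because any pixel value factors into infinitely many (A, S) pairs, the inverse problem is ill-posed, which is why the methods below rely on learned or physical priors.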
Source: CNN based Learning using Reflection and Retinex Models for Intrinsic Image Decomposition
Latest papers
Exploiting Diffusion Prior for Generalizable Pixel-Level Semantic Prediction
Content generated by recent advanced Text-to-Image (T2I) diffusion models is sometimes too imaginative for existing off-the-shelf semantic property predictors to estimate, owing to an unbridgeable domain gap.
HyperDID: Hyperspectral Intrinsic Image Decomposition with Deep Feature Embedding
To address this limitation, this study rethinks hyperspectral intrinsic image decomposition for classification tasks by introducing deep feature embedding.
Intrinsic Image Decomposition via Ordinal Shading
We encourage the model to learn an accurate decomposition by computing losses on the estimated shading as well as the albedo implied by the intrinsic model.
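The dual-loss idea here can be sketched as follows: penalize the shading estimate directly, and also penalize the albedo it implies through the intrinsic model A = I / S. This is a hypothetical NumPy illustration of that principle, not the paper's actual training objective (function name, MSE choice, and `eps` stabilizer are all assumptions):

```python
import numpy as np

def intrinsic_losses(image, pred_shading, gt_shading, gt_albedo, eps=1e-6):
    """Toy sketch: loss on the predicted shading plus loss on the
    albedo implied by the multiplicative model A = I / S."""
    implied_albedo = image / (pred_shading + eps)  # invert I = A * S
    shading_loss = np.mean((pred_shading - gt_shading) ** 2)
    albedo_loss = np.mean((implied_albedo - gt_albedo) ** 2)
    return shading_loss + albedo_loss
```

A perfect shading prediction drives both terms to (near) zero; the `eps` term only guards against division by zero in dark regions.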
DPF: Learning Dense Prediction Fields with Weak Supervision
We showcase the effectiveness of DPFs using two substantially different tasks: high-level semantic parsing and low-level intrinsic image decomposition.
Unsupervised Intrinsic Image Decomposition with LiDAR Intensity
Intrinsic image decomposition (IID) is the task of decomposing a natural image into its albedo and shading components.
Estimating Reflectance Layer from A Single Image: Integrating Reflectance Guidance and Shadow/Specular Aware Learning
To further enforce the reflectance layer to be independent of shadows and specularities in the second-stage refinement, we introduce an S-Aware network that distinguishes the reflectance image from the input image.
SIGNet: Intrinsic Image Decomposition by a Semantic and Invariant Gradient Driven Network for Indoor Scenes
An ablation study shows that the use of the proposed priors and the progressive CNN increases IID performance.
Creating a Forensic Database of Shoeprints from Online Shoe Tread Photos
We develop a method termed ShoeRinsics that learns to predict depth by leveraging a mix of fully supervised synthetic data and unsupervised retail image data.
PIE-Net: Photometric Invariant Edge Guided Network for Intrinsic Image Decomposition
An extensive ablation study and large-scale experiments show that edge-driven hybrid IID networks benefit from illumination-invariant descriptors, and that separating global and local cues improves the network's performance.
Illumination-Aware Image Quality Assessment for Enhanced Low-light Image
To reduce the overshoot effects of low-light image enhancement (LIE), this paper proposes an illumination-aware image quality assessment, called LIE-IQA, for enhanced low-light images.