Color Constancy
35 papers with code • 1 benchmark • 5 datasets
Color Constancy is the ability of the human visual system to perceive the colors of objects in a scene as largely invariant to the color of the light source. The task of computational Color Constancy is to estimate the scene illumination and then perform chromatic adaptation in order to remove the influence of the illumination color on the colors of the objects in the scene.
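The two-step pipeline described above (illuminant estimation followed by chromatic adaptation) can be sketched with the classical gray-world baseline, which assumes the average scene reflectance is achromatic; the function name and normalization choice below are illustrative, not from any listed paper:

```python
import numpy as np

def gray_world_correct(image: np.ndarray) -> np.ndarray:
    """White-balance an H x W x 3 RGB image under the gray-world assumption.

    Step 1 (illuminant estimation): the per-channel mean estimates the
    illuminant color. Step 2 (chromatic adaptation): each channel is
    divided by its illuminant component (a von Kries-style correction).
    """
    image = image.astype(np.float64)
    # Estimate the illuminant as the mean of each color channel.
    illuminant = image.reshape(-1, 3).mean(axis=0)
    # Normalize the estimate so overall brightness is roughly preserved.
    illuminant /= illuminant.mean()
    # Remove the color cast by per-channel division.
    corrected = image / illuminant
    return np.clip(corrected, 0.0, 255.0)
```

On an image with a uniform red cast, the corrected output has equal channel means; learning-based methods in the papers below replace the hand-crafted mean statistic with a learned illuminant estimator but keep the same overall structure.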
Most implemented papers
Conditional GANs for Multi-Illuminant Color Constancy: Revolution or Yet Another Approach?
Non-uniform and multi-illuminant color constancy are important tasks; solving them makes it possible to discard information about the lighting conditions in an image.
Artificial Color Constancy via GoogLeNet with Angular Loss Function
Color Constancy is the ability of the human visual system to perceive colors unchanged independently of the illumination.
The Visual Centrifuge: Model-Free Layered Video Representations
True video understanding requires making sense of non-Lambertian scenes, where the color of light arriving at the camera sensor encodes information not just about the last object it collided with, but about multiple media -- colored windows, dirty mirrors, smoke, or rain.
Spectral Illumination Correction: Achieving Relative Color Constancy Under the Spectral Domain
Achieving color constancy between and within images, i.e., minimizing the color difference between the same object imaged under nonuniform and varied illuminations, is crucial for computer vision tasks such as colorimetric analysis and object recognition.
Learning Matchable Image Transformations for Long-term Metric Visual Localization
Long-term metric self-localization is an essential capability of autonomous mobile robots, but remains challenging for vision-based systems due to appearance changes caused by lighting, weather, or seasonal variations.
DeepIlluminance: Contextual Illuminance Estimation via Deep Neural Networks
First, the contextual net with a center-surround architecture extracts local contextual features from image patches, and generates initial illuminant estimates and the corresponding color corrected patches.
When Color Constancy Goes Wrong: Correcting Improperly White-Balanced Images
The challenge lies not in identifying what the correct white balance should have been, but in the fact that the in-camera white-balance procedure is followed by several camera-specific nonlinear color manipulations that make it challenging to correct the image's colors in post-processing.
Quasi-Unsupervised Color Constancy
After training, unbalanced images can be processed by first converting the input to the neural network to grayscale.
Convolutional Neural Networks Can Be Deceived by Visual Illusions
In particular, we show that CNNs trained for image denoising, image deblurring, and computational color constancy are able to replicate the human response to visual illusions, and that the extent of this replication varies with architecture and spatial pattern size.
Bag of Color Features For Color Constancy
To further improve the illumination estimation accuracy, we propose a novel attention mechanism for the BoCF model with two variants based on self-attention.