Color Constancy
33 papers with code • 1 benchmark • 5 datasets
Color Constancy is the ability of the human visual system to perceive the colors of objects in a scene as largely invariant to the color of the light source. The task of computational color constancy is to estimate the scene illumination and then apply chromatic adaptation to remove the influence of the illumination color on the colors of the objects in the scene.
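The two steps above (illuminant estimation, then chromatic adaptation) can be sketched with the classical gray-world estimator and a von Kries-style diagonal correction. This is a minimal illustration assuming linear RGB input, not the method of any paper listed here:

```python
import numpy as np

def gray_world_correct(img):
    """Estimate the illuminant via the gray-world assumption and
    apply a von Kries-style diagonal (per-channel gain) correction.

    img: float array of shape (H, W, 3), linear RGB in [0, 1].
    Illustrative sketch only; real pipelines use far more
    sophisticated illuminant estimators.
    """
    # Step 1 -- illuminant estimation. Gray-world assumes the average
    # scene reflectance is achromatic, so the per-channel means of the
    # image estimate the illuminant color.
    illuminant = img.reshape(-1, 3).mean(axis=0)
    illuminant = illuminant / np.linalg.norm(illuminant)

    # Step 2 -- chromatic adaptation. Divide each channel by the
    # estimated illuminant, rescaled so the green channel is unchanged.
    gains = illuminant[1] / illuminant
    return np.clip(img * gains, 0.0, 1.0)
```

Under a reddish illuminant, for example, the red channel mean is pulled back down so that a gray surface renders as gray again.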
Latest papers with no code
Self-Supervised Learning of Color Constancy
Color constancy (CC) describes the ability of the visual system to perceive an object as having a relatively constant color despite changes in lighting conditions.
Optimizing Illuminant Estimation in Dual-Exposure HDR Imaging
Within the camera ISP pipeline, illuminant estimation is a crucial step that aims to estimate the color of the global illuminant in the scene.
Pixel-Wise Color Constancy via Smoothness Techniques in Multi-Illuminant Scenes
Motivated by this, we propose a novel multi-illuminant color constancy method, by learning pixel-wise illumination maps caused by multiple light sources.
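For intuition, pixel-wise color constancy generalizes the single diagonal correction to a per-pixel division by an illumination map. The sketch below assumes such a map is already available (e.g. predicted by a network); the map and function are illustrative, not the model proposed in the paper:

```python
import numpy as np

def apply_illumination_map(img, illum_map, eps=1e-6):
    """Remove spatially varying illumination given a pixel-wise map.

    img:       (H, W, 3) linear RGB image.
    illum_map: (H, W, 3) per-pixel illuminant colors (hypothetical
               input, e.g. the output of a learned estimator).

    Dividing each pixel by its own illuminant is the multi-illuminant
    analogue of the global von Kries correction.
    """
    # eps guards against division by zero in dark map regions.
    return np.clip(img / (illum_map + eps), 0.0, 1.0)
```

With a single uniform illuminant the map is constant and this reduces to an ordinary global white-balance gain.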
Investigating Color Illusions from the Perspective of Computational Color Constancy
We argue that any model that can reproduce our sensation on color illusions should also be able to provide pixel-wise estimates of the light source.
Practical cross-sensor color constancy using a dual-mapping strategy
Deep neural networks (DNNs) are widely used for illuminant estimation, but they are time-consuming to train and require sensor-specific data collection.
Fooling Polarization-based Vision using Locally Controllable Polarizing Projection
If so, is it possible to realize these adversarial attacks in the physical world without being perceived by human eyes?
MIMT: Multi-Illuminant Color Constancy via Multi-Task Local Surface and Light Color Learning
To obtain better cues about local surface and light colors under multiple light-color conditions, we design a novel multi-task learning framework.
Template matching with white balance adjustment under multiple illuminants
In this paper, we propose a novel template matching method that uses a white-balance adjustment, called N-white balancing, designed for multi-illuminant scenes.
Dynamic Dense RGB-D SLAM using Learning-based Visual Odometry
We propose a dense dynamic RGB-D SLAM pipeline based on a learning-based visual odometry method, TartanVO.
Designing Perceptual Puzzles by Differentiating Probabilistic Programs
We design new visual illusions by finding "adversarial examples" for principled models of human perception -- specifically, for probabilistic models, which treat vision as Bayesian inference.