Low-Light Image Enhancement
114 papers with code • 21 benchmarks • 21 datasets
Low-Light Image Enhancement is a computer vision task that involves improving the quality of images captured under low-light conditions. The goal is to make such images brighter, clearer, and more visually appealing without introducing excessive noise or distortion.
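As a baseline for what "making images brighter" means in practice, the simplest classical approach is gamma correction with an exponent below 1, which expands the dark tones. A minimal sketch (this toy baseline is not from any of the papers below; the synthetic image and the choice of gamma are illustrative assumptions):

```python
import numpy as np

def gamma_enhance(image: np.ndarray, gamma: float = 0.4) -> np.ndarray:
    """Brighten a low-light image with simple gamma correction.

    Gamma values below 1 expand dark intensities; this is the most
    basic form of low-light enhancement and a common baseline.
    """
    normalized = image.astype(np.float64) / 255.0
    enhanced = np.power(normalized, gamma)
    return (enhanced * 255.0).clip(0, 255).astype(np.uint8)

# A synthetic dark image: intensities concentrated near zero.
rng = np.random.default_rng(0)
dark = rng.integers(0, 60, size=(32, 32, 3), dtype=np.uint8)
bright = gamma_enhance(dark)
```

Learned methods in the list below go far beyond this, precisely because a global curve like gamma amplifies sensor noise along with the signal.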
Most implemented papers
LIME: Low-light Image Enhancement via Illumination Map Estimation
When one captures images in low-light conditions, the images often suffer from low visibility.
Getting to Know Low-light Images with The Exclusively Dark Dataset
Thus, we propose the Exclusively Dark dataset to alleviate this data drought, consisting exclusively of ten different types of low-light images (i.e. low, ambient, object, single, weak, strong, screen, window, shadow and twilight) captured in visible light only, with image- and object-level annotations.
Local Color Distributions Prior for Image Enhancement
Existing image enhancement methods are typically designed to address either the over- or under-exposure problem in the input image.
Enlighten Anything: When Segment Anything Model Meets Low-Light Image Enhancement
Image restoration is a low-level visual task, and most CNN methods are designed as black boxes, lacking transparency and intrinsic aesthetics.
Joint Correcting and Refinement for Balanced Low-Light Image Enhancement
Specifically, the proposed method, the Joint Correcting and Refinement Network (JCRNet), consists of three stages that balance the brightness, color, and illumination of the enhanced output.
Low-light Image Enhancement via CLIP-Fourier Guided Wavelet Diffusion
Moreover, to further promote the effective recovery of image details, we combine the Fourier transform with the wavelet transform and construct a Hybrid High-Frequency Perception Module (HFPM) with strong perception of detailed features.
STAR: A Structure and Texture Aware Retinex Model
A novel Structure and Texture Aware Retinex (STAR) model is further proposed for illumination and reflectance decomposition of a single image.
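The Retinex model underlying STAR factors an observed image into reflectance and illumination, image = reflectance × illumination. A toy sketch of that decomposition (assumptions: STAR actually solves a joint optimization with structure- and texture-aware weights; here illumination is crudely estimated as a box-blurred max channel just to show the factorization):

```python
import numpy as np

def retinex_decompose(image: np.ndarray, ksize: int = 7, eps: float = 1e-3):
    """Toy Retinex decomposition: image = reflectance * illumination.

    Illumination is estimated as a box-blurred per-pixel max channel
    (a crude smoothness prior); reflectance is recovered by division.
    This is an illustration, not STAR's actual optimization.
    """
    img = image.astype(np.float64) / 255.0
    # Initial illumination: per-pixel maximum over the color channels.
    illum = img.max(axis=2)
    # Box-filter the illumination, reflecting the prior that it
    # varies slowly across the scene.
    pad = ksize // 2
    padded = np.pad(illum, pad, mode="edge")
    smooth = np.zeros_like(illum)
    for dy in range(ksize):
        for dx in range(ksize):
            smooth += padded[dy:dy + illum.shape[0],
                             dx:dx + illum.shape[1]]
    smooth /= ksize * ksize
    # Reflectance: divide out the illumination channel-wise.
    reflectance = img / (smooth[..., None] + eps)
    return np.clip(reflectance, 0.0, 1.0), smooth
```

Enhancement methods built on this decomposition typically brighten the illumination map (e.g. with a gamma curve) and recombine it with the reflectance.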
Seeing Motion in the Dark
By carefully designing a learning-based pipeline and introducing a new loss function to encourage temporal stability, we train a siamese network on static raw videos, for which ground truth is available, such that the network generalizes to videos of dynamic scenes at test time.
Self-supervised Image Enhancement Network: Training with Low Light Images Only
To achieve self-supervised learning, we introduce a constraint that the maximum channel of the reflectance should conform to the maximum channel of the low-light image while having maximal entropy.
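The two parts of that constraint can be sketched as simple measurable quantities: a consistency term between the max channels, and the Shannon entropy of the reflectance's max channel (the function names and the histogram binning here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def max_channel_loss(reflectance: np.ndarray, low_light: np.ndarray) -> float:
    # Consistency term: the max channel of the reflectance should
    # conform to the max channel of the low-light input.
    r_max = reflectance.max(axis=2)
    i_max = low_light.max(axis=2)
    return float(np.mean((r_max - i_max) ** 2))

def channel_entropy(channel: np.ndarray, bins: int = 32) -> float:
    # Shannon entropy of a single-channel image with values in [0, 1];
    # the second constraint asks this to be as large as possible.
    hist, _ = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

In training, the consistency term would be minimized while the entropy term is maximized (e.g. by subtracting it from the total loss), so no ground-truth normal-light images are needed.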
Image Demoireing with Learnable Bandpass Filters
Image demoireing is a multi-faceted image restoration task involving both texture and color restoration.