The LOL dataset is composed of 500 low-light and normal-light image pairs and divided into 485 training pairs and 15 testing pairs. The low-light images contain noise produced during the photo capture process. Most of the images are indoor scenes. All the images have a resolution of 400×600.
227 PAPERS • 2 BENCHMARKS
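Evaluation on LOL is typically reported as full-reference fidelity against the normal-light image. A minimal sketch of loading a pair and computing PSNR (the file paths and the [0, 1] normalization are illustrative, not part of the dataset release):

```python
import numpy as np
from PIL import Image

def load_pair(low_path, high_path):
    """Load a low-/normal-light pair as float arrays in [0, 1]."""
    low = np.asarray(Image.open(low_path), dtype=np.float32) / 255.0
    high = np.asarray(Image.open(high_path), dtype=np.float32) / 255.0
    assert low.shape == high.shape  # LOL pairs share the same 400x600 resolution
    return low, high

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio, the usual fidelity metric on LOL."""
    mse = np.mean((pred - target) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```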
The 3DMatch benchmark evaluates how well descriptors (both 2D and 3D) can establish correspondences between RGB-D frames of different views. The dataset contains 2D RGB-D patches and 3D patches (local TDF voxel grid volumes) of wide-baseline correspondences.
166 PAPERS • 3 BENCHMARKS
The Annotated Facial Landmarks in the Wild (AFLW) is a large-scale collection of annotated face images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total about 25K faces are annotated with up to 21 landmarks per image.
154 PAPERS • 11 BENCHMARKS
The See-in-the-Dark (SID) dataset contains 5094 raw short-exposure images, each with a corresponding long-exposure reference image. Images were captured using two cameras: Sony α7SII and Fujifilm X-T2.
139 PAPERS • 3 BENCHMARKS
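The short-/long-exposure pairing above is usually consumed by scaling the short-exposure raw frame up to the reference brightness, following the amplification idea from the companion paper "Learning to See in the Dark". A hedged sketch (the 14-bit black/white levels shown are illustrative defaults, not guaranteed for either camera):

```python
import numpy as np

def amplify_short_exposure(raw_short, t_short, t_long,
                           black_level=512, white_level=16383):
    """Scale a short-exposure raw frame by the exposure-time ratio.

    Subtract the black level, normalize to [0, 1], then multiply by the
    ratio of the reference (long) to input (short) exposure times.
    """
    x = (raw_short.astype(np.float32) - black_level) / (white_level - black_level)
    ratio = t_long / t_short  # e.g. 10 s / 0.1 s = 100x amplification
    return np.clip(x * ratio, 0.0, 1.0)
```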
LLVIP is a visible-infrared paired dataset for low-light vision. It contains 30,976 images (15,488 pairs) covering 24 dark scenes and 2 daytime scenes, and supports image-to-image translation (visible to infrared, or infrared to visible), visible-infrared image fusion, low-light pedestrian detection, and infrared pedestrian detection. The original image and video pairs (before registration) are also released.
76 PAPERS • 7 BENCHMARKS
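Because LLVIP pairs are registered pixel-for-pixel, even a naive per-pixel blend shows how the fusion task consumes the data. A minimal baseline sketch (the equal weighting is illustrative, not any published method):

```python
import numpy as np

def fuse_visible_infrared(vis_gray, ir, w_ir=0.5):
    """Naive baseline fusion of a registered visible/infrared pair.

    Per-pixel weighted average of a grayscale visible image and the
    aligned infrared image; real fusion methods are far more
    sophisticated, this only illustrates the paired-data layout.
    """
    vis = vis_gray.astype(np.float32)
    ir = ir.astype(np.float32)
    fused = (1.0 - w_ir) * vis + w_ir * ir
    return np.clip(fused, 0, 255).astype(np.uint8)
```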
DICM is a dataset for low-light enhancement which consists of 69 images collected with commercial digital cameras.
73 PAPERS • 1 BENCHMARK
The Exclusively Dark (ExDark) dataset is a collection of 7,363 low-light images spanning environments from very low light to twilight (i.e., 10 different conditions), with 12 object classes (similar to PASCAL VOC) annotated at both the image-class level and with local object bounding boxes.
53 PAPERS • 1 BENCHMARK
The MIT-Adobe FiveK dataset consists of 5,000 photographs taken with SLR cameras by a set of different photographers. They are all in RAW format; that is, all the information recorded by the camera sensor is preserved. We made sure that these photographs cover a broad range of scenes, subjects, and lighting conditions. We then hired five photography students in an art school to adjust the tone of the photos. Each of them retouched all 5,000 photos using software dedicated to photo adjustment (Adobe Lightroom), on which they were extensively trained. We asked the retouchers to achieve visually pleasing renditions, akin to a postcard. The retouchers were compensated for their work.
27 PAPERS • 4 BENCHMARKS
LOL-v2-real contains 689 low-/normal-light image pairs for training and 100 pairs for testing.
21 PAPERS • 1 BENCHMARK
This is the low-light image enhancement dataset collected for the ICCV 2019 paper "Seeing Motion in the Dark".
20 PAPERS • 1 BENCHMARK
From Fidelity to Perceptual Quality: A Semi-Supervised Approach for Low-Light Image Enhancement
10 PAPERS • 1 BENCHMARK
The real captured dataset of LOL contains 500 low-/normal-light image pairs. Most low-light images are collected by changing the exposure time and ISO while other camera configurations are fixed. We capture images from a variety of scenes, e.g., houses, campuses, clubs, and streets.
Multi-exposure image fusion (MEF) is considered an effective quality enhancement technique widely adopted in consumer electronics, but little work has been dedicated to the perceptual quality assessment of multi-exposure fused images. In this paper, we first build an MEF database and carry out a subjective user study to evaluate the quality of images generated by different MEF algorithms. There are several useful findings. First, considerable agreement has been observed among human subjects on the quality of MEF images. Second, no single state-of-the-art MEF algorithm produces the best quality for all test images. Third, the existing objective quality models for general image fusion are very limited in predicting the perceived quality of MEF images. Motivated by the lack of appropriate objective models, we propose a novel objective image quality assessment (IQA) algorithm for MEF images based on the principle of the structural similarity approach and a novel measure of patch structural consistency.
9 PAPERS • 1 BENCHMARK
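The structural-similarity principle the description above refers to can be illustrated with the global (single-window) form of SSIM; MEF-SSIM itself operates on local patches and pools the scores, which this sketch omits:

```python
import numpy as np

def ssim_global(x, y, max_val=1.0):
    """Single-window structural similarity between two images.

    Global (whole-image) form of SSIM with the standard stabilizing
    constants; practical IQA models apply it over local windows and
    pool the per-window scores.
    """
    c1 = (0.01 * max_val) ** 2
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```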
To make synthetic images match the properties of real dark photography, we analyze the illumination distribution of low-light images. We collect 270 low-light images from the public MEF [42], NPE [6], LIME [8], DICM [43], VV, and Fusion [44] datasets, transform the images into the YCbCr color space, and calculate the histogram of the Y channel. We also collect 1,000 raw images from RAISE [45] as normal-light images and calculate the histogram of their Y channel in YCbCr.
7 PAPERS • 1 BENCHMARK
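The Y-channel histogram computation described above can be sketched as follows (this uses the BT.601 full-range RGB-to-Y conversion; the paper's exact YCbCr variant may differ slightly):

```python
import numpy as np

def y_channel_histogram(rgb, bins=256):
    """Normalized histogram of the luma (Y) channel of an 8-bit RGB image,
    as used to compare illumination distributions of low- vs. normal-light
    images."""
    rgb = rgb.astype(np.float32)
    # BT.601 luma coefficients
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    hist, _ = np.histogram(y, bins=bins, range=(0, 256))
    return hist / hist.sum()  # normalize so histograms are comparable
```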
LoLi-Phone is a large-scale low-light image and video dataset for low-light image enhancement (LLIE). The images and videos are taken by the cameras of different mobile phones under diverse illumination conditions.
5 PAPERS • NO BENCHMARKS YET
The dataset collected by the ICCV 2021 paper "Seeing Dynamic Scene in the Dark: High-Quality Video Dataset with Mechatronic Alignment".
4 PAPERS • 1 BENCHMARK
The original SID dataset was introduced in "Learning to See in the Dark". The subset of SID captured with the Sony α7S II camera is adopted for evaluation; it contains 2,697 short-/long-exposure RAW image pairs. To make the dataset more challenging, the RAW images were converted to sRGB with no gamma correction, which leaves the images extremely dark.
3 PAPERS • 1 BENCHMARK
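Skipping gamma correction keeps the converted images in roughly linear light, which is why they appear extremely dark: the sRGB transfer function would otherwise lift shadow values substantially. A small illustration using the standard sRGB encoding curve:

```python
import numpy as np

def srgb_encode(linear):
    """Standard sRGB transfer function (linear light -> display code values)."""
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(
        linear <= 0.0031308,
        12.92 * linear,
        1.055 * np.power(linear, 1.0 / 2.4) - 0.055,
    )

# A mid-shadow linear value stays dark without the curve but is lifted by it:
lin = 0.05
print(lin, srgb_encode(lin))  # skipping gamma leaves 0.05; encoding raises it to ~0.25
```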
The goal of this project is to present two new datasets that extend the "Learning to See in the Dark" low-light enhancement CNN to the Canon 6D DSLR, and to explore how the network performs when modified in various ways, both by pruning it and by making it deeper.
1 PAPER • 2 BENCHMARKS
The LLNeRF dataset is a real-world benchmark for model learning and evaluation. To obtain real low-illumination images with realistic noise distributions, photos are taken in nighttime outdoor scenes and low-light indoor scenes containing diverse objects. Since ISP operations are device-dependent and noise distributions differ across devices, the data is collected with both a mobile phone camera and a DSLR camera to enrich the diversity of the dataset.
1 PAPER • NO BENCHMARKS YET
Dataset release for the BMVC 2021 paper "Few-Shot Domain Adaptation for Low Light RAW Image Enhancement".
We introduce the benchmark dataset “Low-light Images of Streets (LoLI-Street),” which contains three subsets: train, validation, and test. The train and validation sets consist of 30k and 3k paired low-light and high-light images, respectively, and the real low-light test set (RLLT) contains 1k images under real-world low-light conditions, totaling 33k images.
0 PAPERS • NO BENCHMARKS YET
Introduced by Khan et al. in "Divide and conquer: Ill-light image enhancement via hybrid deep network" (https://www.sciencedirect.com/science/article/abs/pii/S0957417421004759).