KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) is one of the most popular datasets for use in mobile robotics and autonomous driving. It consists of hours of traffic scenarios recorded with a variety of sensor modalities, including high-resolution RGB, grayscale stereo cameras, and a 3D laser scanner. Despite its popularity, the dataset itself does not contain ground truth for semantic segmentation. However, various researchers have manually annotated parts of the dataset to fit their needs. Álvarez et al. generated ground truth for 323 images from the road detection challenge with three classes: road, vertical, and sky. Zhang et al. annotated 252 (140 for training and 112 for testing) acquisitions – RGB and Velodyne scans – from the tracking challenge for ten object categories: building, sky, road, vegetation, sidewalk, car, pedestrian, cyclist, sign/pole, and fence. Ros et al. labeled 170 training images and 46 testing images (from the visual odometry challenge).
3,406 PAPERS • 142 BENCHMARKS
A new large-scale benchmark consisting of both synthetic and real-world hazy images, called REalistic Single Image DEhazing (RESIDE). RESIDE highlights diverse data sources and image contents, and is divided into five subsets, each serving different training or evaluation purposes.
43 PAPERS • 5 BENCHMARKS
DENSE (Depth Estimation oN Synthetic Events) is a new dataset with synthetic events and perfect ground truth.
40 PAPERS • 1 BENCHMARK
The I-Haze dataset contains 25 indoor hazy images (size 2833×4657 pixels) for training. It has 5 hazy images for validation, along with their corresponding ground-truth images.
35 PAPERS • 1 BENCHMARK
NH-HAZE is an image dehazing dataset. Since in many real cases haze is not uniformly distributed, NH-HAZE is a non-homogeneous realistic dataset with pairs of real hazy and corresponding haze-free images. This is the first non-homogeneous image dehazing dataset and contains 55 outdoor scenes. The non-homogeneous haze has been introduced in the scene using a professional haze generator that imitates the real conditions of hazy scenes.
31 PAPERS • 4 BENCHMARKS
Haze4k is a synthesized dataset with 4,000 hazy images, in which each hazy image has the associated ground truths of a latent clean image, a transmission map, and an atmospheric light map.
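The three per-image ground truths are tied together by the standard atmospheric scattering model, I = J·t + A·(1 − t), where J is the latent clean image, t the transmission map, and A the atmospheric light. A minimal NumPy sketch of synthesizing a hazy image and inverting the model (array names and shapes are illustrative, not the dataset's loading API):

```python
import numpy as np

def synthesize_haze(clean, transmission, airlight):
    """Atmospheric scattering model: I = J * t + A * (1 - t)."""
    t = transmission[..., None]  # broadcast the (H, W) map over RGB channels
    return clean * t + airlight * (1.0 - t)

def recover_clean(hazy, transmission, airlight, t_min=0.1):
    """Invert the model: J = (I - A) / max(t, t_min) + A.

    t_min avoids division blow-up where transmission is near zero.
    """
    t = np.maximum(transmission, t_min)[..., None]
    return (hazy - airlight) / t + airlight

# Toy example with hypothetical shapes
clean = np.random.rand(4, 4, 3)          # latent clean image J
t = np.full((4, 4), 0.6)                 # transmission map
A = np.array([0.9, 0.9, 0.9])            # atmospheric light
hazy = synthesize_haze(clean, t, A)
restored = recover_clean(hazy, t, A)     # matches `clean` when t > t_min
```

With ground-truth t and A, the inversion is exact wherever t exceeds the clamp threshold, which is why such synthetic datasets are convenient for supervised evaluation.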
23 PAPERS • 1 BENCHMARK
A dataset of images taken in different locations with varying water properties, showing color charts in the scenes. Moreover, to obtain ground truth, the 3D structure of the scene was calculated based on stereo imaging. This dataset enables a quantitative evaluation of restoration algorithms on natural images.
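The stereo-derived ground truth rests on the standard pinhole relation between disparity and depth, Z = f·B/d (focal length f in pixels, baseline B in meters, disparity d in pixels). A small illustrative sketch, with function and variable names that are assumptions rather than the dataset's actual tooling:

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Pinhole stereo: Z = f * B / d. Zero disparity is marked invalid (inf)."""
    d = np.asarray(disparity, dtype=float)
    depth = np.full(d.shape, np.inf)   # non-matched pixels get infinite depth
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

# e.g. a 100-px disparity with f = 1000 px and a 10 cm baseline -> 1 m
z = depth_from_disparity(np.array([100.0, 0.0]), focal_px=1000.0, baseline_m=0.1)
```

In underwater imaging the recovered 3D structure gives per-pixel range, which is what makes quantitative evaluation of attenuation-dependent restoration possible.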
17 PAPERS • NO BENCHMARKS YET
The D-HAZY dataset is generated from the NYU depth indoor image collection. D-HAZY contains a depth map for each indoor hazy image. It contains 1400+ real images and corresponding depth maps used to synthesize hazy scenes based on Koschmieder's light propagation model.
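Under Koschmieder's model, the per-pixel transmission follows Beer–Lambert attenuation of the depth map, t(x) = exp(−β·d(x)), and the hazy image is a transmission-weighted blend of the clean image with the airlight. A minimal sketch of this synthesis step (β, airlight value, and array shapes are illustrative assumptions):

```python
import numpy as np

def transmission_from_depth(depth, beta=1.0):
    """Beer-Lambert attenuation: t(x) = exp(-beta * d(x))."""
    return np.exp(-beta * depth)

def add_haze(clean, depth, airlight=0.8, beta=1.0):
    """Koschmieder's model: I = J * t + A * (1 - t), t derived from depth."""
    t = transmission_from_depth(depth, beta)[..., None]  # broadcast over RGB
    return clean * t + airlight * (1.0 - t)

clean = np.random.rand(3, 3, 3)                       # haze-free RGB image
depth = np.linspace(0.5, 3.0, 9).reshape(3, 3)        # per-pixel depth (m)
hazy = add_haze(clean, depth)
```

Because transmission decays exponentially with depth, distant pixels converge to the airlight color, which is exactly the visual signature of haze the dataset aims to reproduce.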
16 PAPERS • NO BENCHMARKS YET
A large-scale non-homogeneous remote sensing image dehazing dataset
11 PAPERS • 1 BENCHMARK
Consists of 2,864 videos, each with a label from 25 different classes corresponding to an event unfolding within 5 seconds. The ERA dataset is designed to have significant intra-class variation and inter-class similarity, and captures dynamic events in various circumstances and at dramatically varying scales.
7 PAPERS • NO BENCHMARKS YET
A dataset of over 65,000 pairs of incorrectly white-balanced images and their corresponding correctly white-balanced images.
SEN12MS-CR-TS is a multi-modal and multi-temporal dataset for cloud removal. It contains time series of paired and co-registered Sentinel-1 radar data along with cloudy and cloud-free Sentinel-2 data from the European Space Agency's Copernicus mission. Each time series contains 30 cloudy and clear observations regularly sampled throughout the year 2018. The multi-temporal dataset is readily pre-processed and backward-compatible with SEN12MS-CR.
7 PAPERS • 1 BENCHMARK
A large-scale video dataset for MOR in aerial videos.
6 PAPERS • NO BENCHMARKS YET
The laparoscopic surgery dataset is associated with our International Journal of Computer Assisted Radiology and Surgery (IJCARS) publication titled “DeSmoke-LAP: Improved Unpaired Image-to-Image Translation for Desmoking in Laparoscopic Surgery”. The training model of the proposed method is available as open source on GitHub. We propose DeSmoke-LAP, a new method for removing smoke from real robotic laparoscopic hysterectomy videos. The proposed method is based on the unpaired image-to-image cycle-consistent generative adversarial network, in which two novel loss functions, namely inter-channel discrepancies and dark channel prior, are introduced.
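The dark channel prior referenced here is the observation (He et al.) that in haze/smoke-free natural patches, at least one color channel tends toward zero intensity. A minimal NumPy sketch of computing the dark channel, which such a loss would penalize on desmoked outputs (patch size and the nested-loop form are illustrative, not the paper's implementation):

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel: min over RGB, then min over a local square patch.

    img: (H, W, 3) float array in [0, 1]. Smoke/haze raises this value;
    clean natural images tend to have it near zero.
    """
    min_rgb = img.min(axis=2)                 # per-pixel channel minimum
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for i in range(h):                        # local minimum filter
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

# An image with one channel at zero has an all-zero dark channel
clean_like = np.random.rand(5, 5, 3)
clean_like[..., 0] = 0.0
dc = dark_channel(clean_like)
```

A dark-channel loss typically takes the mean of this map over the generated image, pushing the translation network toward smoke-free statistics.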
1 PAPER • NO BENCHMARKS YET
The SMOKE dataset is a dataset for fog/smoke removal. It contains 110 self-collected fog/smoke images and their clean pairs, plus 12 additional pairs of fog data for evaluation.