PASCAL-S is a dataset for salient object detection consisting of 850 images from the PASCAL VOC 2010 validation set, with multiple salient objects in each scene.
227 PAPERS • 3 BENCHMARKS
DUTS is a saliency detection dataset containing 10,553 training images and 5,019 test images. All training images are collected from the ImageNet DET training/validation sets, while test images are collected from the ImageNet DET test set and the SUN dataset. Both the training and test sets contain very challenging scenarios for saliency detection. Accurate pixel-level ground truths are manually annotated by 50 subjects.
192 PAPERS • 5 BENCHMARKS
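Benchmarks on pixel-annotated datasets like DUTS typically score a predicted saliency map against the binary ground-truth mask; mean absolute error (MAE) is one of the standard metrics. A minimal sketch, assuming both maps are scaled to [0, 1] (the function name and toy arrays are illustrative):

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a predicted saliency map and a
    binary ground-truth mask, both in [0, 1]."""
    pred = pred.astype(np.float64)
    gt = gt.astype(np.float64)
    return np.abs(pred - gt).mean()

# Toy 2x2 example: two pixels exact, two pixels off by 0.5.
pred = np.array([[1.0, 0.5], [0.0, 0.5]])
gt = np.array([[1.0, 1.0], [0.0, 0.0]])
print(mae(pred, gt))  # 0.25
```

Lower MAE is better; benchmark tables usually report it averaged over the whole test set.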
HKU-IS is a visual saliency prediction dataset containing 4,447 challenging images, most of which have either low contrast or multiple salient objects.
181 PAPERS • 3 BENCHMARKS
The DUT-OMRON dataset is used to evaluate salient object detection and contains 5,168 high-quality images. Each image has one or more salient objects and a relatively cluttered background.
169 PAPERS • 4 BENCHMARKS
SOC (Salient Objects in Clutter) is a dataset for Salient Object Detection (SOD). It includes images with salient and non-salient objects from daily object categories. Beyond object category annotations, each salient image is accompanied by attributes that reflect common challenges in real-world scenes.
26 PAPERS • 1 BENCHMARK
The Extended Complex Scene Saliency Dataset (ECSSD) comprises complex scenes with textures and structures common to real-world images. ECSSD contains 1,000 intricate images with corresponding ground-truth saliency maps, created by averaging the labelings of five human participants.
20 PAPERS • 4 BENCHMARKS
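The ECSSD ground truth is described as an average over five annotators' labelings. That aggregation can be sketched as below; the 0.5 majority threshold for producing a binary map is an assumption for illustration, not the authors' documented procedure:

```python
import numpy as np

# Five toy 2x2 binary annotator masks for the same image.
masks = np.stack([
    np.array([[1, 1], [0, 0]]),
    np.array([[1, 1], [0, 0]]),
    np.array([[1, 0], [0, 0]]),
    np.array([[1, 1], [0, 1]]),
    np.array([[1, 1], [0, 0]]),
])

avg = masks.mean(axis=0)                 # soft consensus map in [0, 1]
binary = (avg >= 0.5).astype(np.uint8)   # assumed majority-vote threshold
print(avg)     # per-pixel agreement, e.g. 0.8 where 4 of 5 agree
print(binary)
```

The soft map preserves annotator agreement per pixel; thresholding collapses it to a single binary mask.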
VT5000 includes 5,000 spatially aligned RGB-thermal (RGBT) image pairs with ground-truth annotations. The dataset covers 11 challenges collected across different scenes and environments for evaluating the robustness of algorithms.
11 PAPERS • NO BENCHMARKS YET
The Extended Optical Remote Sensing Saliency Detection (EORSSD) dataset is an extension of the ORSSD dataset. This new dataset is larger and more varied than the original. It contains 2,000 images and corresponding pixel-wise ground truth, which includes many semantically meaningful but challenging images.
9 PAPERS • NO BENCHMARKS YET
Several datasets exist for saliency detection, but none is specifically designed for high-resolution salient object detection. The High-Resolution Salient Object Detection (HRSOD) dataset contains 1,610 training images and 400 test images. The 2,010 images in total are collected from Flickr under Creative Commons licenses. Pixel-level ground truths are manually annotated by 40 subjects. The shorter edge of each image in HRSOD exceeds 1,200 pixels.
9 PAPERS • 1 BENCHMARK
Lytro Illum is a light field saliency dataset captured with a Lytro Illum camera. It contains 640 light fields with significant variation in object size, texture, background clutter, and illumination. Micro-lens image arrays and central-view images are generated, along with corresponding ground-truth maps.
6 PAPERS • NO BENCHMARKS YET
A dataset aimed at detecting small obstacles, in the spirit of Lost and Found.
6 PAPERS • 2 BENCHMARKS
A set of realistic odd-one-out stimuli gathered "in the wild". Each image in the Odd-One-Out (O3) dataset depicts a scene with multiple objects similar to each other in appearance (distractors) and a singleton (target) distinct in one or more feature dimensions (e.g., color, shape, or size). All images are resized so that the larger dimension is 1024 px. Targets span approximately 400 common object types such as flowers, sweets, chicken eggs, leaves, tiles, and birds. Pixel-wise masks are provided for targets and distractors; annotations are generated using CVAT.
2 PAPERS • NO BENCHMARKS YET
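The O3 resizing rule (scale each image so its larger dimension becomes 1024 while preserving aspect ratio) can be sketched as a small size computation; the exact rounding behavior used by the dataset authors is an assumption here:

```python
def resize_dims(w, h, max_dim=1024):
    """Compute target (width, height) so the larger dimension equals
    max_dim and the aspect ratio is preserved. Rounding to the
    nearest pixel is an illustrative assumption."""
    scale = max_dim / max(w, h)
    return round(w * scale), round(h * scale)

print(resize_dims(2048, 1536))  # (1024, 768)
print(resize_dims(1536, 2048))  # (768, 1024)
```

The same target size would then be passed to any image-resizing routine, applied identically to the image and its masks so they stay aligned.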
A set of patterns used in psychophysical research to evaluate the ability of saliency algorithms to find targets distinct from distractors in orientation, color, and size. Each image is a 7×7 grid containing a single target. All images are 1024×1024 px, with corresponding ground-truth masks for the target and distractors.
2 PAPERS • NO BENCHMARKS YET
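For grid-style search arrays like these, a ground-truth target mask marks only the cell holding the singleton. A toy sketch of building such a mask (the cell size and target position below are illustrative assumptions, not the dataset's actual layout parameters):

```python
import numpy as np

def make_target_mask(grid=7, cell=16, target=(3, 3)):
    """Build a binary mask for a grid-of-items search array:
    the cell at `target` (row, col) is foreground (the singleton),
    every other cell is background (distractors)."""
    mask = np.zeros((grid * cell, grid * cell), dtype=np.uint8)
    r, c = target
    mask[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell] = 1
    return mask

m = make_target_mask()
print(m.shape, int(m.sum()))  # (112, 112) 256
```

A distractor mask would be built the same way, setting every grid cell except the target's.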