The Mila Simulated Floods dataset is a 1.5 km² virtual world built with the Unity3D game engine, covering urban, suburban, and rural areas.
Quality, diversity, and size of the training dataset are critical factors for learning-based gaze estimators. We create two datasets satisfying these criteria for near-eye gaze estimation under infrared illumination: a synthetic dataset using anatomically informed eye and face models with variations in face shape, gaze direction, pupil and iris, skin tone, and external conditions (two million images at 1280x960), and a real-world dataset collected with 35 subjects (2.5 million images at 640x480). Using our datasets, we train a neural network for gaze estimation, achieving 2.06 (+/- 0.44) degrees of accuracy across a wide 30 x 40 degree field of view on real subjects excluded from training, and 0.5 degrees best-case accuracy (across the same field of view) when explicitly trained for one real subject. We also train a variant of our network to perform pupil estimation, showing higher robustness than previous methods. Our network requires fewer convolutional layers than previous networks.
A dataset of experimental and synthetic (simulated) optoacoustic (OA) raw signals and reconstructed images, rendered with different experimental parameters and tomographic acquisition geometries.
ODMS is a dataset for learning Object Depth via Motion and Segmentation. ODMS training data are configurable and extensible, with each training example consisting of a series of object segmentation masks, camera movement distances, and ground truth object depth. As a benchmark evaluation, the dataset provides four ODMS validation and test sets with 15,650 examples in multiple domains, including robotics and driving.
OpenEDS2020 is a dataset of eye-image sequences captured at a frame rate of 100 Hz under controlled illumination, using a virtual-reality head-mounted display mounted with two synchronized eye-facing cameras. The dataset, which is anonymized to remove any personally identifiable information on participants, consists of 80 participants of varied appearance performing several gaze-elicited tasks, and is divided in two subsets: 1) Gaze Prediction Dataset, with up to 66,560 sequences containing 550,400 eye-images and respective gaze vectors, created to foster research in spatio-temporal gaze estimation and prediction approaches; and 2) Eye Segmentation Dataset, consisting of 200 sequences sampled at 5 Hz, with up to 29,500 images, of which 5% contain a semantic segmentation label, devised to encourage the use of temporal information to propagate labels to contiguous frames.
The PAX-Ray++ dataset uses pseudo-labeled thorax CTs to enable the segmentation of anatomy in chest X-rays. By projecting the CTs to a 2D plane, we gather fine-grained annotated images resembling radiographs. It contains 7,377 frontal and lateral view images, each with 157 anatomy classes and over 2 million annotated instances.
A new benchmark dataset of webcam images, Photi-LakeIce, from multiple cameras and two different winters, along with pixel-wise ground truth annotations.
The Retinal Microsurgery dataset is a dataset for surgical instrument tracking. It consists of 18 in-vivo sequences, each with 200 frames of resolution 1920 × 1080 pixels. The dataset is further classified into four instrument-dependent subsets. The annotated tool joints are n=3 and semantic classes c=2 (tool and background).
A special scene graph for intelligent vehicles. Unlike classical data representations, this graph provides not only object proposals but also their pair-wise relationships. Organized in a topological graph, the data are explainable, fully connected, and can be easily processed by GCNs (Graph Convolutional Networks).
Video sequences recorded at a field on Campus Kleinaltendorf (CKA), University of Bonn, by BonBot-I, an autonomous weeding robot. The data were captured with an Intel RealSense D435i sensor mounted with a nadir view of the ground.
TOP is a synthetic dataset for topology optimization generated using ToPy. The dataset has 10,000 objects, each consisting of 100 iterations of the optimization process for a problem defined on a regular 40 x 40 grid.
Semantic segmentation of drone images is critical for various aerial vision tasks as it provides essential semantic details to understand scenes on the ground. Ensuring high accuracy of semantic segmentation models for drones requires access to diverse, large-scale, and high-resolution datasets, which are often scarce in the field of aerial image processing. While existing datasets typically focus on urban scenes and are relatively small, our Varied Drone Dataset (VDD) addresses these limitations by offering a large-scale, densely labeled collection of 400 high-resolution images spanning 7 classes. This dataset features various scenes in urban, industrial, rural, and natural areas, captured from different camera angles and under diverse lighting conditions.
The Vistas-NP dataset is an out-of-distribution detection dataset based on the Mapillary Vistas dataset. The original Vistas dataset consists of 18,000 training images and 2,000 validation images with 66 classes. In Vistas-NP, the human classes are used as outliers due to their dispersion across scenes and their visual diversity from other objects. The dataset is created by moving all images containing the person class or one of the three rider classes to the test subset. Consequently, the dataset has 8,003 training images and 830 validation images; the test set contains 11,167 images.
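The Vistas-NP split rule (any image whose label mask contains a human class goes to the test subset) can be sketched as follows. This is an illustrative sketch only: the class IDs and the `(image_id, label_ids)` representation are assumptions, not the official Vistas-NP tooling.

```python
# Assumed IDs for the person class and the three rider classes; the real
# Mapillary Vistas label IDs may differ.
OUTLIER_CLASSES = {19, 20, 21, 22}

def assign_split(label_ids):
    """Return 'test' if the mask contains any human class, else 'train'."""
    return "test" if OUTLIER_CLASSES & set(label_ids) else "train"

def build_splits(dataset):
    """dataset: iterable of (image_id, label_ids) pairs.

    Returns a dict mapping split name to the list of image ids in it.
    """
    splits = {"train": [], "test": []}
    for image_id, label_ids in dataset:
        splits[assign_split(label_ids)].append(image_id)
    return splits
```

Because the rule is applied per image rather than per pixel, every human instance ends up in the test subset, which is what makes the remaining train/val images purely in-distribution.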
ZeroWaste is a dataset for automatic waste detection and segmentation. This dataset contains over 1,800 fully segmented video frames collected from a real waste sorting plant along with waste material labels for training and evaluation of the segmentation methods, as well as over 6,000 unlabeled frames that can be further used for semi-supervised and self-supervised learning techniques. ZeroWaste also provides frames of the conveyor belt before and after the sorting process, comprising a novel setup that can be used for weakly-supervised segmentation.
The dataset consists of images of 158 filled-out bank checks with various complex backgrounds and handwritten text and signatures in the respective fields, along with both pixel-level and patch-level segmentation masks for the signatures on the checks. Please visit the dataset homepage for more details.
The dataset contains 73 satellite images of different forests damaged by wildfires across Europe with a resolution of up to 10m per pixel. Data were collected from the Sentinel-2 L2A satellite mission and the target labels were generated from the Copernicus Emergency Management Service (EMS) annotations, with five different severity levels, ranging from undamaged to completely destroyed.
Chinese Character Stroke Extraction (CCSE) is a benchmark containing two large-scale datasets: Kaiti CCSE (CCSE-Kai) and Handwritten CCSE (CCSE-HW). It is designed for stroke extraction problems.
CheXlocalize is a radiologist-annotated segmentation dataset on chest X-rays. The dataset consists of two types of radiologist annotations for the localization of 10 pathologies: pixel-level segmentations and most-representative points. Annotations were drawn on images from the CheXpert validation and test sets. The dataset also consists of two separate sets of radiologist annotations: (1) ground-truth pixel-level segmentations on the validation and test sets, drawn by two board-certified radiologists, and (2) benchmark pixel-level segmentations and most-representative points on the test set, drawn by a separate group of three board-certified radiologists.
Ciona17 is a semantic segmentation dataset with pixel-level annotations pertaining to invasive species in a marine environment. Diverse outdoor illumination, a range of object shapes, colour, and severe occlusion provide a significant real world challenge for the computer vision community.
CongNaMul Dataset
From DroneDeploy:
EBHI-Seg is a dataset containing 5,170 images of six tumor differentiation stages and the corresponding ground truth images. The dataset can support researchers in developing new segmentation algorithms for the medical diagnosis of colorectal cancer.
In EMDS-6, there are 21 classes of environmental microorganisms (EMs). Each class contains 40 original EM images and their corresponding binary ground truth images, in which the foreground is white and the background is black.
The French National Institute of Geographical and Forest Information (IGN) has the mission to document and measure land-cover on French territory and provides referential geographical datasets, including high-resolution aerial images and topographic maps. The monitoring of land-cover plays a crucial role in land management and planning initiatives, which can have significant socio-economic and environmental impact. Together with remote sensing technologies, artificial intelligence (AI) promises to become a powerful tool in determining land-cover and its evolution. IGN is currently exploring the potential of AI in the production of high-resolution land-cover maps. Notably, deep learning methods are employed to obtain a semantic segmentation of aerial images. However, territories as large as France imply heterogeneous contexts: variations in landscapes and image acquisition make it challenging to provide uniform, reliable and accurate results across all of France.
A benchmark for detecting fallen people lying on the floor. It consists of 6,982 images, with a total of 5,023 falls and 2,275 non-falls corresponding to people in conventional situations (standing, sitting, lying on a sofa or bed, walking, etc.). Almost all images were captured in indoor environments under very different conditions: variation in poses and sizes, occlusions, lighting changes, etc.
FractureAtlas is a musculoskeletal bone fracture dataset with annotations for deep learning tasks such as classification, localization, and segmentation. The dataset contains a total of 4,083 X-ray images with annotations in COCO, VGG, YOLO, and Pascal VOC formats. It is freely available under a CC-BY 4.0 license: the data may be copied, shared, or redistributed in any medium or format, and may be adapted, remixed, transformed, and built upon. Note that correctly interpreting the dataset requires knowledge of the medical and radiology fields, and the possibility of labeling errors should be considered.
The Freiburg Terrains dataset consists of three parts: 3.7 hours of audio recordings from a microphone pointed at the robot wheels, 24K RGB images from a camera mounted on top of the robot, and SLAM poses for each data-collection run. The dataset can be used for terrain classification, which is useful for agent navigation tasks.
We provide all the expected data inputs to GUISS such as meshes, texture images, and blend files. Generated datasets used in our experiments, along with the stereo depth estimations, can be downloaded. We have defined seven dataset types: scene_reconstructions, texture_variation, gaea_texture_variation, generative_texture, terrain_variation, rocks, and generative_texture_snow. Each dataset type contains renderings with varying values of different parameters such as lighting angle, texture images, albedo, etc. Position each dataset type folder under data/dataset/.
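The expected folder layout can be checked with a small helper like the one below. The seven folder names come from the description above; the helper itself is an illustrative assumption, not part of the official GUISS tooling.

```python
from pathlib import Path

# The seven dataset types listed in the GUISS description.
DATASET_TYPES = [
    "scene_reconstructions", "texture_variation", "gaea_texture_variation",
    "generative_texture", "terrain_variation", "rocks",
    "generative_texture_snow",
]

def missing_dataset_types(root="data/dataset"):
    """Return the dataset-type folders not yet present under `root`."""
    root_path = Path(root)
    return [name for name in DATASET_TYPES if not (root_path / name).is_dir()]
```

Running this before launching experiments makes it easy to spot a dataset type that was downloaded but not moved under data/dataset/.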
HASCD (Human Activity Segmentation Challenge Dataset) contains 250 annotated multivariate time series capturing 10.7 h of real-world human motion smartphone sensor data from 15 bachelor computer science students. The recordings capture 6 distinct human motion sequences designed to represent pervasive behaviour in realistic indoor and outdoor settings. The data set serves as a benchmark for evaluating machine learning workflows.
The image set contains 180 high-resolution color microscopic images of human duodenum adenocarcinoma HuTu 80 cell populations obtained in an in vitro scratch assay (for the details of the experimental protocol, we refer to (Liang et al., 2007)). Briefly, cells were seeded in 12-well culture plates ($20 \times 10^3$ cells per well) and grown to form a monolayer with 85% or more confluency. Then the cell monolayer was scraped in a straight line using a pipette tip ($200 \mu L$). The debris was removed by washing with a growth medium and the medium in wells was replaced. The scratch areas were marked to obtain the same field during the image acquisition. Images of the scratches were captured immediately following the scratch formation, as well as after 24, 48 and 72 h of cultivation.
Semantic segmentation of drone images is critical for various aerial vision tasks as it provides essential semantic details to understand scenes on the ground. Ensuring high accuracy of semantic segmentation models for drones requires access to diverse, large-scale, and high-resolution datasets, which are often scarce in the field of aerial image processing. While existing datasets typically focus on urban scenes and are relatively small, our Varied Drone Dataset (VDD) addresses these limitations by offering a large-scale, densely labeled collection of 400 high-resolution images spanning 7 classes. This dataset features various scenes in urban, industrial, rural, and natural areas, captured from different camera angles and under diverse lighting conditions. We also make new annotations to UDD and UAVid, integrating them under VDD annotation standards, to create the Integrated Drone Dataset (IDD). It's expected that our dataset will generate considerable interest in drone image segmentation.
The LIB-HSI dataset contains hyperspectral reflectance images and their corresponding RGB images of building façades in a light industrial environment. The dataset also contains pixel-level annotated images for each hyperspectral/RGB image. The LIB-HSI dataset was created to develop deep learning methods for segmenting building facade materials.
Usually, the information available about the crop types grown in a given territory is annual: we only know the main crop grown over the year, not which crops followed one another during the year, nor when a particular crop was sown and harvested. The main objective of this dataset is to create the basis for experimenting with solutions that give a reliable answer to these questions, or to propose models capable of producing dynamic segmentation maps that show when a crop begins to grow and when it is harvested, and consequently whether more than one crop has been grown in a territory within a year. The dataset has 20 coverage classes as ground-truth values provided by Regione Lombardia. The mapping of the class labels used (see file lombardia-classes/classes25pc.txt) groups some classes together and provides the time intervals within which each category grows.
The Multiple Light Source dataset (MLS) is a collection of 24 multiple-object scenes, each recorded under 18 multiple-light-source illumination scenarios. The illuminants vary in dominant spectral colour, intensity, and distance from the scene. The dataset can be used for the evaluation of computational colour constancy algorithms. Along with the images of the scenes, the spectral characteristics of the camera, the light sources, and the objects are also provided. Each image includes pixel-by-pixel ground-truth annotation of uniformly coloured object surfaces, making the dataset also useful for benchmarking colour-based image segmentation algorithms.
MOSAD (Mobile Sensing Human Activity Data Set) is a multi-modal, annotated time series (TS) data set that contains 14 recordings of 9 triaxial smartphone sensor measurements (126 TS) from 6 human subjects performing (in part) 3 motion sequences in different locations. The aim of the data set is to facilitate the study of human behaviour and the design of TS data mining technology to separate individual activities using low-cost sensors in wearable devices.
MTNeuro is a multi-task neuroimaging benchmark built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions.
The Multi-Spectral Imaging via Computed Tomography (MUSIC) dataset is a two-part (2D and 3D spectral) open access dataset for advanced image analysis of spectral radiographic (x-ray) scans, their tomographic reconstruction, and the detection of specific materials within such scans. The scans cover a photon energy range from around 20 keV up to 160 keV.
Meta Omnium is a dataset-of-datasets spanning multiple vision tasks including recognition, keypoint localization, semantic segmentation and regression. Meta Omnium enables meta-learning researchers to evaluate model generalization to a much wider array of tasks than previously possible, and provides a single framework for evaluating meta-learners across a wide suite of vision applications in a consistent manner.
The MixedWM38 dataset (WaferMap) has more than 38,000 wafer maps covering 38 patterns in total: 1 normal pattern, 8 single-defect patterns, and 29 mixed-defect patterns.
This dataset consists of four sets of flower images, from three different species: apple, peach, and pear, and accompanying ground truth images. The images were acquired under a range of imaging conditions. These datasets support work in an accompanying paper that demonstrates a flower identification algorithm that is robust to uncontrolled environments and applicable to different flower species. While this data is primarily provided to support that paper, other researchers interested in flower detection may also use the dataset to develop new algorithms. Flower detection is a problem of interest in orchard crops because it is related to management of fruit load.
Multispectral and HD vineyard orthomosaics from central Portugal
OmniCity is a dataset for omnipotent city understanding from multi-level and multi-view images. It contains multi-view satellite images as well as street-level panorama and mono-view images, constituting over 100K pixel-wise annotated images that are well-aligned and collected from 25K geo-locations in New York City. This dataset introduces a new task of fine-grained building instance segmentation on street-level panorama images. It also provides new problem settings for existing tasks, such as cross-view image matching, synthesis, segmentation, detection, etc., and facilitates the development of new methods for large-scale city understanding, reconstruction, and simulation.
The Person In Context (PIC) dataset is a dataset for human-centric relation segmentation (HRS), which contains 17,122 high-resolution images and densely annotated entity segmentation and relations, including 141 object categories, 23 relation categories and 25 semantic human parts.
Panoramic Video Panoptic Segmentation Dataset is a large-scale dataset that offers high-quality panoptic segmentation labels for autonomous driving. The dataset has labels for 28 semantic categories and 2,860 temporal sequences that were captured by five cameras mounted on autonomous vehicles driving in three different geographical locations, leading to a total of 100k labeled camera images.
This dataset was built with data acquired at the Hospital Clinic of Barcelona, Spain. It is composed of a total of 1126 HD polyp images. There are a total of 473 unique polyps, with a variable number of different shots per polyp (minimum: 2, maximum: 24, median: 10). Special attention was paid to ensure that images from the same polyp show different conditions. An external frame-grabber and a white light endoscope were used to capture raw images. The dataset contains images with two different resolutions: 1920 x 1080 and 1350 x 1080.
This dataset for the semantic segmentation of potholes and cracks on the road surface was assembled from 5 other datasets already publicly available, plus a very small addition of segmented images on our part. To speed up the labeling operations, we started working with depth cameras to try to automate, to some extent, this extremely time-consuming phase.
A Video Dataset for Visual Perception and Autonomous Navigation in Unstructured Environments. Website: http://rugd.vision/
Risk-Aware Planning is a dataset that contains overhead images and their semantic segmentations, captured by a drone in the CityEnviron environment of the AirSim simulator.