This dataset includes multi-spectral acquisitions of vegetation for the design of new DeepIndices. The images were acquired with the Airphen (Hyphen, Avignon, France) six-band multi-spectral camera configured with the 450/570/675/710/730/850 nm bands at a 10 nm FWHM. The dataset was acquired at the INRAe site in Montoldre (Allier, France, 46°20'30.3"N 3°26'03.6"E), within the framework of the “RoSE challenge” funded by the French National Research Agency (ANR), and in Dijon (Burgundy, France, 47°18'32.5"N 5°04'01.8"E) on the AgroSup Dijon site. Images of bean and corn, containing various natural weeds (yarrows, amaranth, geranium, plantago, etc.) and sowed ones (mustards, goosefoots, mayweed and ryegrass), were acquired in top-down view at 1.8 meters from the ground under very distinct illumination conditions (shadow, morning, evening, full sun, cloudy, rain, ...). (2020-05-01)
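The band set above spans red (675 nm) and near-infrared (850 nm), the pair used by classical hand-crafted vegetation indices such as NDVI, which DeepIndices aim to improve upon. A minimal sketch of computing NDVI from such a six-band stack (the band ordering and array layout are assumptions for illustration, not part of the dataset specification):

```python
import numpy as np

# Assumed band order matching the Airphen configuration described above,
# stacked as a (6, H, W) array.
BANDS_NM = (450, 570, 675, 710, 730, 850)

def ndvi(stack: np.ndarray, red_nm: int = 675, nir_nm: int = 850) -> np.ndarray:
    """Normalized Difference Vegetation Index from the red and NIR bands."""
    red = stack[BANDS_NM.index(red_nm)].astype(np.float64)
    nir = stack[BANDS_NM.index(nir_nm)].astype(np.float64)
    return (nir - red) / (nir + red + 1e-12)  # epsilon avoids division by zero

# Toy example: high NIR over low red reflectance yields an index near 1
# (vegetation-like response).
stack = np.zeros((6, 2, 2))
stack[BANDS_NM.index(675)] = 10.0   # red reflectance
stack[BANDS_NM.index(850)] = 90.0   # NIR reflectance
print(ndvi(stack)[0, 0])  # ≈ 0.8
```

A learned DeepIndex would replace this fixed ratio with a trainable combination of all six bands.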
1 PAPER • 1 BENCHMARK
dacl10k stands for damage classification 10k images; it is a multi-label semantic segmentation dataset covering 19 classes (13 damage types and 6 objects) present on bridges.
1 PAPER • NO BENCHMARKS YET
The Instance Segmentation task, an extension of the well-known Object Detection task, is of great help in many areas, such as precision agriculture: to automatically identify plant organs and address early disease detection and diagnosis on vine plants, a new dataset has been created with the goal of advancing the state of the art of disease recognition via instance segmentation. Preliminary results for the object detection and instance segmentation tasks reached by the Mask R-CNN and R^3-CNN models are provided as baselines, demonstrating that the procedure is able to reach promising results.
1 PAPER • 2 BENCHMARKS
…We recommend using the Multi Atlas Segmentation and Morphometric Analysis Toolkit (MASMAT) for mouse brain MRI, along with the other mouse brain atlases in this repo.
2 PAPERS • NO BENCHMARKS YET
EgoHOS is a labeled dataset consisting of 11,243 egocentric images with per-pixel segmentation labels of hands and the objects being interacted with during a diverse array of daily activities.
3 PAPERS • NO BENCHMARKS YET
The Multi-Object Tracking and Segmentation (MOTS) benchmark consists of 21 training sequences and 29 test sequences. It is based on the KITTI Tracking Evaluation 2012 and extends the annotations to the MOTS task. To this end, dense pixel-wise segmentation labels were added for every object. Submitted results are evaluated using the HOTA, CLEAR MOT, and MT/PT/ML metrics, and methods are ranked by HOTA [1] (adapted for the segmentation case). Evaluation is performed using the code from the TrackEval repository. [1] J. Luiten, A. Os̆ep, P. Dendorfer, P. Torr, A. Geiger, L. Leal-Taixé, B. Leibe: MOTS: Multi-Object Tracking and Segmentation. CVPR 2019.
26 PAPERS • 1 BENCHMARK
…This dataset was collected in order to carry out segmentation, feature extraction, and classification tasks and to compare common segmentation, feature extraction, and classification algorithms (Semantic Segmentation, Convolutional Neural Networks, Bag of Features). If you use this dataset in your work, please consider citing: @inproceedings{ulucan2020large, title={A Large-Scale Dataset for Fish Segmentation and Classification}, author={Ulucan, Oguzhan and Karakaya
DOORS is a dataset designed for boulder recognition, centroid regression, segmentation, and navigation applications. Segmentation: contains images, masks, and labels for 2 datasets, DS1 and DS2. DS1 is made of the same images as the Regression dataset but is specifically designed for segmentation.
FractureAtlas is a musculoskeletal bone fracture dataset with annotations for deep learning tasks like classification, localization, and segmentation.
5,987 high spatial resolution (0.3 m) remote sensing images from Nanjing, Changzhou, and Wuhan. The dataset focuses on the different geographical environments of urban and rural areas and advances both semantic segmentation and domain adaptation tasks. It poses three considerable challenges: multi-scale objects, complex background samples, and inconsistent class distributions. Two contests are held on Codalab: LoveDA Semantic Segmentation
48 PAPERS • 1 BENCHMARK
4 PAPERS • NO BENCHMARKS YET
MatSeg is a dataset for zero-shot material state segmentation. It contains large-scale synthetic images for training and highly diverse real-world image benchmarks for testing, focusing on zero-shot, class-agnostic segmentation of materials and their states, i.e., finding the regions of material states without pre-training on the specific material classes or states. It provides both hard segmentation maps and soft, partial-similarity annotations for similar but not identical materials.
UVO is a new benchmark for open-world class-agnostic object segmentation in videos.
23 PAPERS • 3 BENCHMARKS
…ACCT is a fast and accessible automatic cell counting tool using machine learning for 2D image segmentation.
A Sentinel-2-based, multi-country time-series benchmark dataset tailored for agricultural monitoring applications with machine and deep learning. The Sen4AgriNet dataset is annotated from farmer declarations collected via the Land Parcel Identification System (LPIS), harmonizing country-wide labels. Sen4AgriNet is the only multi-country, multi-year dataset that includes all spectral information. It is constructed to cover the period 2016-2020 for Catalonia and France, while it can be extended to include additional countries. Currently, it contains 42.5 million parcels, which makes it significantly larger than other available archives.
SAMRS is a remote sensing segmentation dataset which provides object category, location, and instance information that can be used for semantic segmentation, instance segmentation, and object detection.
IntrA is an open-access 3D intracranial aneurysm dataset that makes points-based and mesh-based classification and segmentation models applicable. 103 3D models of entire brain vessels are collected by reconstructing scanned 2D MRA images of patients (the raw 2D MRA images are not published due to medical ethics). 1,909 blood vessel segments are generated automatically from the complete models, including 1,694 healthy vessel segments and 215 aneurysm segments for diagnosis. 116 aneurysm segments are divided and annotated manually by medical experts; the scale of each aneurysm segment is based on the needs of a preoperative examination. Geodesic distance matrices are computed and included for each annotated 3D segment, because geodesic distance is more accurate than Euclidean distance with respect to the shape of vessels.
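A geodesic distance matrix of the kind IntrA ships can be sketched as all-pairs shortest paths over the edges of a vessel graph, so distances follow the curved vessel rather than cut straight through space. A minimal illustration on a toy polyline graph (the adjacency structure here is hypothetical; in IntrA it would come from the mesh edges of an annotated 3D segment):

```python
import heapq

# Toy "vessel" graph: adjacency list mapping vertex -> [(neighbor, edge length)].
adj = {
    0: [(1, 1.0)],
    1: [(0, 1.0), (2, 2.0)],
    2: [(1, 2.0), (3, 1.5)],
    3: [(2, 1.5)],
}

def geodesic_from(src, adj):
    """Single-source shortest-path (Dijkstra) distances over the graph."""
    dist = {v: float("inf") for v in adj}
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Full geodesic distance matrix: one Dijkstra run per vertex.
matrix = [[geodesic_from(s, adj)[t] for t in adj] for s in adj]
print(matrix[0][3])  # 4.5, the length along the vessel path 0-1-2-3
```

For a tortuous vessel, this along-the-surface distance can be much larger than the Euclidean distance between the same two points, which is why the dataset includes it.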
25 PAPERS • 2 BENCHMARKS
Standardized Multi-Channel Dataset for Glaucoma (SMDG-19) is a collection and standardization of 19 public datasets comprising full-fundus glaucoma images, associated image metadata such as optic disc, optic cup, and blood vessel segmentations, and any provided per-instance text metadata such as sex and age.
0 PAPERS • NO BENCHMARKS YET
…It annotates inter-segment relations based on COCO panoptic segmentation.
20 PAPERS • 1 BENCHMARK
Panoptic nuScenes is a benchmark dataset that extends the popular nuScenes dataset with point-wise groundtruth annotations for semantic segmentation, panoptic segmentation, and panoptic tracking tasks.
5 PAPERS • NO BENCHMARKS YET
ZeroWaste is a dataset for automatic waste detection and segmentation. This dataset contains over 1,800 fully segmented video frames collected from a real waste sorting plant, along with waste material labels for training and evaluation of segmentation methods. ZeroWaste also provides frames of the conveyor belt before and after the sorting process, comprising a novel setup that can be used for weakly-supervised segmentation.
The Segmenting and Tracking Every Pixel (STEP) benchmark consists of 21 training sequences and 29 test sequences. It is based on the KITTI Tracking Evaluation and the Multi-Object Tracking and Segmentation (MOTS) benchmark. This benchmark extends the annotations to the Segmenting and Tracking Every Pixel (STEP) task. [Copy-pasted from http://www.cvlibs.net/datasets/kitti/eval_step.php]
20 PAPERS • 2 BENCHMARKS
An instance segmentation dataset of yeast cells in microstructures. The dataset includes 493 densely annotated microscopy images. For more information see the paper "An Instance Segmentation Dataset of Yeast Cells in Microstructures".
CAMO++ is a dataset for camouflaged object segmentation. This dataset increases the number of images with hierarchical pixel-wise ground-truths. The authors also provide a benchmark suite for the task of camouflaged instance segmentation.
6 PAPERS • NO BENCHMARKS YET
…The database aggregates 657,566 anatomical segmentation masks derived from images which have been processed using the HybridGNet model to ensure consistent, high-quality segmentation. To confirm the quality of the segmentations, we include in this database individual Reverse Classification Accuracy (RCA) scores for each of the segmentation masks.
The fetoscopy placenta dataset is associated with our MICCAI2020 publication titled “Deep Placental Vessel Segmentation for Fetoscopic Mosaicking”. The dataset contains 483 frames with ground-truth vessel segmentation annotations taken from six different in vivo fetoscopic procedure videos. The dataset also includes six unannotated in vivo continuous fetoscopic video clips (950 frames) with predicted vessel segmentation maps obtained from the leave-one-out cross-validation of our method. We annotate a binary mask for vessel segmentation using the Pixel Annotation Tool.
The Person In Context (PIC) dataset is a dataset for human-centric relation segmentation (HRS), which contains 17,122 high-resolution images and densely annotated entity segmentation and relations, including
LASIESTA (Labeled and Annotated Sequences for Integral Evaluation of SegmenTation Algorithms) is a segmentation and detection dataset composed of many real indoor and outdoor sequences organized into categories.
DanbooRegion is a dataset consisting of 5,377 in-the-wild illustrations downloaded from Danbooru2018, paired with region segment map annotations. Illustrations are provided as 1024px 8-bit RGB images, and region segment maps as int-32 index images.
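An int-32 index image of the kind described above stores, at each pixel, the index of the region that pixel belongs to. A minimal sketch of splitting such a map into per-region boolean masks (the toy array and its encoding are illustrative assumptions, not the dataset's exact file format):

```python
import numpy as np

# Toy int-32 region index image (2x4): three regions indexed 0, 1, 2.
region_map = np.array([
    [0, 0, 1, 1],
    [0, 2, 2, 1],
], dtype=np.int32)

# One boolean mask per region index.
masks = {int(i): region_map == i for i in np.unique(region_map)}
print(len(masks))       # 3 regions
print(masks[1].sum())   # region 1 covers 3 pixels
```

The same pattern scales directly to the 1024px maps: each mask can then be used to flat-fill or shade one region of the illustration.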
…The Task 1 challenge dataset for lesion segmentation contains 2,000 images for training with ground truth segmentations (2000 binary mask images).
14 PAPERS • NO BENCHMARKS YET
PASCAL VOC 2011 is an image segmentation dataset. It contains 2,223 training images with 5,034 objects and 1,111 testing images with 2,028 objects. In total there are over 5,000 precisely segmented objects for training.
19 PAPERS • 2 BENCHMARKS
Fetoscopic Placental Vessel Segmentation and Registration (FetReg) is a large-scale multi-centre dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms.
…Each reconstruction has clean dense geometry, high resolution and high dynamic range textures, glass and mirror surface information, planar segmentation, as well as semantic class and instance segmentation.
291 PAPERS • 3 BENCHMARKS
The ScanNet200 benchmark studies 200-class 3D semantic segmentation - an order of magnitude more class categories than previous 3D scene understanding benchmarks. The source of scene data is identical to ScanNet, but it parses a larger vocabulary for semantic and instance segmentation.
24 PAPERS • 3 BENCHMARKS
The Vocal Folds dataset is a dataset for automatic segmentation of laryngeal endoscopic images. The dataset consists of 8 sequences from 2 patients containing 536 hand-segmented in vivo colour images of the larynx, acquired during two different resection interventions, with a resolution of 512x512 pixels.
…The images are annotated by segmentation masks of the object(s) of interest. The original purpose of the data collection is for gesture-aware object-agnostic segmentation tasks.
BRATS 2014 is a brain tumor segmentation dataset.
5 PAPERS • 1 BENCHMARK
…The ACDC dataset contains cardiac MRI images, paired with hand-made segmentation masks. It is possible to use the segmentation masks provided in the ACDC dataset to evaluate the performance of methods trained using only scribble supervision. References: [1] Bernard, Olivier, et al. "Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: is the problem solved?." IEEE transactions on medical imaging 37.11 (2018): 2514-2525.
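Evaluating a scribble-supervised method against the ACDC masks typically means computing an overlap metric between predicted and ground-truth binary masks; the Dice coefficient is the standard choice in cardiac segmentation. A minimal sketch (the toy masks are illustrative, not ACDC data):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return (2.0 * np.logical_and(pred, gt).sum() + eps) / (pred.sum() + gt.sum() + eps)

# Toy masks: 2 of 3 foreground pixels agree between prediction and ground truth.
gt   = np.array([[1, 1, 0], [1, 0, 0]])
pred = np.array([[1, 1, 0], [0, 0, 1]])
print(round(dice(pred, gt), 3))  # 0.667
```

For multi-structure cardiac evaluation, this is computed per class (e.g. left ventricle, right ventricle, myocardium) and averaged.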
9 PAPERS • 1 BENCHMARK
We design an all-day semantic segmentation benchmark, all-day CityScapes. It is the first semantic segmentation benchmark that contains samples from all-day scenarios, i.e., from dawn to night.
3 PAPERS • 1 BENCHMARK
…The segmentation evaluation is based on three tasks: WT, TC and ET segmentation.
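In the common BraTS labeling convention (an assumption about this snippet; the exact encoding varies across editions), the three evaluated regions are nested unions of the intra-tumoral labels: whole tumor (WT) covers all tumor labels, tumor core (TC) excludes edema, and enhancing tumor (ET) is the enhancing label alone. A minimal sketch of deriving the three evaluation masks from a label map:

```python
import numpy as np

# Assumed BraTS-style label map: 0 background, 1 necrotic/non-enhancing core,
# 2 edema, 4 enhancing tumor.
labels = np.array([
    [0, 2, 2],
    [1, 4, 2],
    [0, 1, 4],
])

wt = np.isin(labels, (1, 2, 4))  # whole tumor: every tumor label
tc = np.isin(labels, (1, 4))     # tumor core: excludes edema
et = labels == 4                 # enhancing tumor only

print(wt.sum(), tc.sum(), et.sum())  # 7 4 2
```

Each of the three binary masks is then scored independently (typically with Dice and Hausdorff distance) against the same derivation applied to the ground truth.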
73 PAPERS • 1 BENCHMARK
MinneApple is a benchmark dataset for apple detection and segmentation. The fruits are labelled using polygonal masks for each object instance to aid in precise object detection, localization, and segmentation.
15 PAPERS • NO BENCHMARKS YET
…data includes synchronized and aligned samples of the following: angle of linear polarization (AoLP) images, degree of linear polarization (DoLP) images, RGB images, lidar scans, ground truth free space segmentation (road segmentation), GNSS / IMU readings (vehicle location, vehicle orientation, vehicle speed, vehicle acceleration, etc.) and calibration matrices. Additionally, the dataset includes free space segmentation of 8,141 images.
…It is a concealed defect segmentation dataset assembled from five well-known defect segmentation databases. It contains five sub-databases: MVTecAD, NEU, CrackForest, KolektorSDD, and MagneticTile.
The BraTS 2015 dataset is a dataset for brain tumor image segmentation. It consists of 220 high grade glioma (HGG) and 54 low grade glioma (LGG) MRIs. Segmented “ground truth” is provided for four intra-tumoral classes, viz. edema, enhancing tumor, non-enhancing tumor, and necrosis.
66 PAPERS • 1 BENCHMARK
This dataset contains pre- and post-destruction images, as well as segmentation labels for test images.
The York Urban Line Segment Database is a compilation of 102 images (45 indoor, 57 outdoor) of urban environments, consisting mostly of scenes from the campus of York University and downtown Toronto, Canada. Each image in the database has been hand-labelled to identify the set of line segments satisfying the “Manhattan assumption” (Coughlan & Yuille 2003), i.e., the set of line segments that conform to the three mutually orthogonal scene directions. The database provides the original images, camera calibration parameters, ground truth line segments, and the estimated Manhattan frame relative to the camera for each image.
15 PAPERS • 2 BENCHMARKS
The Waymo Open Dataset currently contains 1,950 segments. The authors plan to grow this dataset in the future. Currently the dataset includes: 1,950 segments of 20s each, collected at 10Hz (390,000 frames) in diverse geographies and conditions. Sensor data: 1 mid-range lidar, 4 short-range lidars, 5 cameras (front and sides), lidar-to-camera projections, sensor calibrations and vehicle poses. Labeled data: labels for 4 object classes - Vehicles, Pedestrians, Cyclists, Signs; high-quality labels for lidar data in 1,200 segments; 12.6M 3D bounding box labels with tracking IDs on lidar data; high-quality labels for camera data in 1,000 segments; 11.8M 2D bounding box labels with tracking IDs on camera data.
380 PAPERS • 12 BENCHMARKS
CheXlocalize is a radiologist-annotated segmentation dataset on chest X-rays. The dataset consists of two types of radiologist annotations for the localization of 10 pathologies: pixel-level segmentations and most-representative points. The dataset also consists of two separate sets of radiologist annotations: (1) ground-truth pixel-level segmentations on the validation and test sets, drawn by two board-certified radiologists, and (2) benchmark pixel-level segmentations and most-representative points on the test set, drawn by a separate group of three board-certified radiologists.