The Dense Material Segmentation Dataset (DMS) consists of 3 million polygon labels of material categories (metal, wood, glass, etc.) for 44 thousand RGB images. The dataset is described in the research paper, A Dense Material Segmentation Dataset for Indoor and Outdoor Scene Parsing.
0 PAPER • NO BENCHMARKS YET
Video class agnostic segmentation (VCAS) is the task of segmenting objects without regard to their semantics, combining appearance, motion, and geometry from monocular video sequences. The main motivation is to account for unknown objects in the scene and to act as a redundant signal, alongside the segmentation of known classes, for better safety.
1 PAPER • NO BENCHMARKS YET
We propose a new benchmark called Human Video Instance Segmentation (HVIS), which focuses on complex real-world scenarios with sufficient human instance masks and identities.
CaDIS: a Cataract Dataset for Image Segmentation is a dataset for semantic segmentation created by Digital Surgery Ltd. on top of the CATARACTS dataset.
7 PAPERS • 3 BENCHMARKS
LiTS17 is a liver tumor segmentation benchmark. The data and segmentations are provided by various clinical sites around the world.
38 PAPERS • 3 BENCHMARKS
The colorectal nuclear segmentation and phenotypes (CoNSeP) dataset consists of 41 H&E stained image tiles, each of size 1,000×1,000 pixels at 40× objective magnification.
51 PAPERS • 1 BENCHMARK
The directory HiCIS contains two datasets for instance segmentation of honeycombs in concrete, in COCO format.
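A COCO-format instance segmentation dataset like this one is a JSON file with `images`, `categories`, and `annotations` tables, where each annotation stores its object outline as flat polygon coordinate lists. The sketch below parses a minimal, hypothetical annotation of that shape with the standard library only; the file name, sizes, and coordinates are illustrative, not taken from HiCIS.

```python
import json

# Minimal COCO-style instance segmentation annotation (hypothetical honeycomb
# example; real files follow the same top-level layout with many more entries).
coco = {
    "images": [{"id": 1, "file_name": "concrete_001.jpg", "width": 640, "height": 480}],
    "categories": [{"id": 1, "name": "honeycomb"}],
    "annotations": [{
        "id": 1, "image_id": 1, "category_id": 1,
        "segmentation": [[10.0, 10.0, 60.0, 10.0, 60.0, 40.0, 10.0, 40.0]],
        "bbox": [10.0, 10.0, 50.0, 30.0],  # [x, y, width, height]
        "area": 1500.0, "iscrowd": 0,
    }],
}

data = json.loads(json.dumps(coco))  # stands in for json.load(open(path))
cats = {c["id"]: c["name"] for c in data["categories"]}
for ann in data["annotations"]:
    polygon = ann["segmentation"][0]              # flat [x1, y1, x2, y2, ...] list
    points = list(zip(polygon[::2], polygon[1::2]))  # pair into (x, y) vertices
    print(cats[ann["category_id"]], len(points), "vertices")  # honeycomb 4 vertices
```

In practice one would use a reader such as pycocotools instead of hand-parsing, but the JSON layout above is what those readers consume.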
HASCD (Human Activity Segmentation Challenge Dataset) contains 250 annotated multivariate time series capturing 10.7 h of real-world human motion smartphone sensor data from 15 bachelor computer science students.
Spine or vertebral segmentation is a crucial step in all applications concerning automated quantification of spinal morphology and pathology. The evaluated tasks include vertebra labelling and segmentation.
26 PAPERS • NO BENCHMARKS YET
This is the first general Underwater Image Instance Segmentation (UIIS) dataset, containing 4,628 images across 7 categories with pixel-level annotations for the underwater instance segmentation task.
1 PAPER • 1 BENCHMARK
ODMS is a dataset for learning Object Depth via Motion and Segmentation. ODMS training data are configurable and extensible, with each training example consisting of a series of object segmentation masks, camera movement distances, and ground truth object depth.
2 PAPERS • NO BENCHMARKS YET
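The ODMS description above specifies what one training example holds: a series of segmentation masks, the camera movement distances, and a ground-truth depth. A minimal container mirroring that structure might look as follows; the class and field names are hypothetical, not ODMS's actual API.

```python
from dataclasses import dataclass
from typing import List

Mask = List[List[int]]  # binary segmentation mask as rows of 0/1

# Illustrative container for one ODMS training example as described above.
# Field names are assumptions for the sketch, not the dataset's real schema.
@dataclass
class ODMSExample:
    masks: List[Mask]       # one object segmentation mask per observation
    distances: List[float]  # camera movement distance per observation (m)
    depth: float            # ground-truth object depth (m)

example = ODMSExample(
    masks=[[[0, 1], [1, 1]], [[1, 1], [1, 1]]],  # object grows as camera nears
    distances=[0.0, 0.1],
    depth=2.5,
)
print(len(example.masks), example.depth)  # 2 2.5
```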
There are two major challenges to enabling such an attractive learning modality for segmentation tasks: i) a large-scale benchmark for assessing algorithms is missing; ii) unsupervised shape representation learning is underexplored. We propose a new problem of large-scale unsupervised semantic segmentation (LUSS) with a newly created benchmark dataset to track research progress. Based on the ImageNet dataset, we propose the ImageNet-S dataset with 1.2 million training images and 50k high-quality semantic segmentation annotations for evaluation.
31 PAPERS • 6 BENCHMARKS
SegTHOR (Segmentation of THoracic Organs at Risk) is a dataset dedicated to the segmentation of organs at risk (OARs) in the thorax, i.e. the organs surrounding the tumour that must be preserved from irradiation.
22 PAPERS • NO BENCHMARKS YET
The SWIMSEG dataset contains 1013 images of sky/cloud patches, along with their corresponding binary segmentation maps.
6 PAPERS • 1 BENCHMARK
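Predictions against binary segmentation maps like SWIMSEG's are commonly scored with intersection-over-union (IoU). This generic sketch uses plain nested lists of 0/1 pixels (e.g. 0 = sky, 1 = cloud); it is not a SWIMSEG-specific loader or the benchmark's official metric code.

```python
def binary_iou(pred, gt):
    """IoU of two same-sized binary masks given as lists of 0/1 rows."""
    inter = sum(p & g for pr, gr in zip(pred, gt) for p, g in zip(pr, gr))
    union = sum(p | g for pr, gr in zip(pred, gt) for p, g in zip(pr, gr))
    return inter / union if union else 1.0  # two empty masks match perfectly

pred = [[0, 1, 1],
        [0, 1, 1]]
gt   = [[0, 0, 1],
        [0, 1, 1]]
print(binary_iou(pred, gt))  # 3 overlapping of 4 total cloud pixels -> 0.75
```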
To build the highly accurate Dichotomous Image Segmentation dataset (DIS5K), we first manually collected over 12,000 images from Flickr based on our pre-designed keywords.
29 PAPERS • 5 BENCHMARKS
The Freiburg-Berkeley Motion Segmentation Dataset (FBMS-59) is a dataset for motion segmentation, which extends the BMS-26 dataset with 33 additional video sequences.
17 PAPERS • 2 BENCHMARKS
The Freiburg-Berkeley Motion Segmentation Dataset (FBMS-59) is an extension of the BMS dataset with 33 additional video sequences. A total of 720 frames is annotated. It has pixel-accurate segmentation annotations of moving objects. FBMS-59 comes with a split into a training set and a test set.
118 PAPERS • 3 BENCHMARKS
These data can be used in several ways to develop and validate algorithms for nuclear detection, classification, and segmentation, or as a resource to develop and evaluate methods for interrater analysis. For multi-rater datasets, we provide annotations generated with and without suggestions from weak segmentation and classification algorithms.
7 PAPERS • NO BENCHMARKS YET
To reveal and systematically investigate the effectiveness of the proposed method in the real world, a real low-light image dataset for instance segmentation is urgently needed. Since no suitable dataset exists, we collect and annotate a Low-light Instance Segmentation (LIS) dataset using a Canon EOS 5D Mark IV camera.
The Embrapa Wine Grape Instance Segmentation Dataset (WGISD) contains grape clusters properly annotated in 300 images, along with a novel annotation methodology for segmentation of complex objects in natural images.
5 PAPERS • NO BENCHMARKS YET
We introduce a large-scale image dataset EasyPortrait for portrait segmentation and face parsing. Segmentation masks were created from polygons for each annotation.
Semantic segmentation of drone images is critical for various aerial vision tasks, as it provides essential semantic details for understanding scenes on the ground. Ensuring high accuracy of semantic segmentation models for drones requires access to diverse, large-scale, and high-resolution datasets, which are often scarce in the field of aerial image processing.
The SWINSEG dataset contains 115 nighttime images of sky/cloud patches along with their corresponding binary ground truth maps. The ground truth annotation was done in consultation with experts from Singapore Meteorological Services. All images were captured in Singapore using WAHRSIS, a calibrated ground-based whole sky imager, over a period of 12 months from January to December 2016. All image patches are 500×500 pixels in size, and were selected considering several factors such as time of the image capture, cloud coverage, and seasonal variations.
5 PAPERS • 1 BENCHMARK
The dataset used in this challenge consists of 165 images derived from 16 H&E stained histological sections of stage T3 or T4 colorectal adenocarcinoma. Each section belongs to a different patient, and sections were processed in the laboratory on different occasions. Thus, the dataset exhibits high inter-subject variability in both stain distribution and tissue architecture. The digitization of these histological sections into whole-slide images (WSIs) was accomplished using a Zeiss MIRAX MIDI Slide Scanner with a pixel resolution of 0.465 µm.
83 PAPERS • 1 BENCHMARK
The images in the SWINySeg dataset are taken from two of our earlier sky/cloud image segmentation datasets, SWIMSEG and SWINSEG.
3 PAPERS • 1 BENCHMARK
A synthetic training dataset of 50,000 depth images and 320,000 object masks generated using simulated heaps of 3D CAD models.
PASTIS is a benchmark dataset for panoptic and semantic segmentation of agricultural parcels from satellite image time series.
9 PAPERS • 2 BENCHMARKS
The 2021 Kidney and Kidney Tumor Segmentation challenge (abbreviated KiTS21) is a competition in which teams compete to develop the best system for automatic semantic segmentation of renal tumors and the surrounding anatomy. It follows on from KiTS19, whose results are reported in "The state of the art in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: Results of the KiTS19 Challenge."
7 PAPERS • 1 BENCHMARK
The ORVS dataset has been newly established as a collaboration between the computer science and visual-science departments at the University of Calgary.
3 PAPERS • NO BENCHMARKS YET
The Partial and Unusual Masks for Video Object Segmentation (PUMaVOS) dataset has the following properties: 24 videos with 21,187 densely annotated frames; coverage of complex practical use cases such as object …
In moving object segmentation of point cloud sequences, one has to provide motion labels for each point of the test sequences 11-21. We map all moving-x classes of the original SemanticKITTI semantic segmentation benchmark to a single moving object class. More information on the task and the metric can be found in the related publication: Chen et al., "Moving Object Segmentation in 3D LiDAR Data: A Learning-based Approach," IEEE Robotics and Automation Letters, 2021.
6 PAPERS • NO BENCHMARKS YET
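The moving-x collapse described above amounts to a simple label remap. The sketch below assumes the moving classes occupy IDs 252-259 (as in the semantic-kitti-api label config) and uses 9/251 as the binary static/moving output labels; treat all of these IDs as assumptions to check against your own label files.

```python
# Collapse SemanticKITTI's per-category moving-x labels into one moving class.
# IDs below are assumptions from the semantic-kitti-api config, not verified
# against this benchmark's release: 252 moving-car ... 259 moving-other-vehicle.
MOVING_IDS = set(range(252, 260))
STATIC, MOVING = 9, 251  # assumed binary labels for the static/moving task

def to_moving_labels(labels):
    """Map per-point semantic labels to the binary static/moving task."""
    return [MOVING if label in MOVING_IDS else STATIC for label in labels]

print(to_moving_labels([10, 252, 30, 259]))  # [9, 251, 9, 251]
```

On real scans this would be done with a vectorized lookup table over the per-point label array rather than a Python list comprehension.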
MEIS comprises a total of 2,639 images of size 1024 × 768 across two recording views, Aortic Valve (AV) and Left Ventricle (LV), with 1,521 images for training (747 in AV + 774 in LV) and 1,118 for testing (559 in AV + 559 in LV). Two objects must be detected in each view to calculate the measurement indicators, giving four object classes in total: aortic root (AoR) and left atrium (LA) in AV; interventricular septum (IVS) and left ventricular posterior wall (LVPW) in LV. The medical meaning and purpose of each indicator are as follows: • AV: LA-Dimension and AoR-Dimension can be measured to calculate indicators such as the AoR/LA ratio, used to examine the state of the aortic valve. • LV: six measurements include IVSs, IVSd, LVIDs, LVIDd, LVPWs, and LVPWd. These thicknesses and dimensions in the LV recording are used to estimate other cardiac functions through specific medical formulas, including LV mass and LV …
1 PAPER • 2 BENCHMARKS
An extension of the official KITTI'15 dataset, adding independently moving instance segmentation ground truth to cover all moving objects, not just a selection of cars and vans. The dataset contains: instance motion segmentation of all moving objects; binary motion segmentation (background/foreground); validation masks; and instance motion segmentation for the training split of the KITTI …
Egocentric Dataset of the University of Barcelona – Segmentation (EDUB-Seg) is a dataset for egocentric event segmentation acquired by the Narrative Clip, which takes a picture every 30 seconds.
4 PAPERS • NO BENCHMARKS YET
This project aims to provide all the materials the community needs to address the problem of echocardiographic image segmentation and volume estimation from 2D ultrasound sequences (both two- and four-chamber views). The platform aims to assess, in a reproducible manner, the performance of methods for segmenting cardiac structures (left ventricle endocardium and epicardium, and left atrium borders) and extracting clinical indices.
54 PAPERS • NO BENCHMARKS YET
This dataset includes multi-spectral acquisitions of vegetation for the conception of new DeepIndices. The images were acquired with the Airphen (Hyphen, Avignon, France) six-band multi-spectral camera, configured with the 450/570/675/710/730/850 nm bands at 10 nm FWHM. The dataset was acquired at the INRAe site in Montoldre (Allier, France, at 46°20'30.3"N 3°26'03.6"E), within the framework of the "RoSE challenge" funded by the French National Research Agency (ANR), and in Dijon (Burgundy, France, at 47°18'32.5"N 5°04'01.8"E) at the AgroSup Dijon site. Images of bean and corn, containing various natural weeds (yarrow, amaranth, geranium, plantago, etc.) and sown ones (mustard, goosefoot, mayweed, and ryegrass) with very distinct characteristics in terms of illumination (shadow, morning, evening, full sun, cloudy, rain, ...), were acquired in top-down view at 1.8 meters from the ground. (2020-05-01)
dacl10k stands for damage classification 10k images and is a multi-label semantic segmentation dataset for 19 classes (13 damages and 6 objects) present on bridges.
Extension of the PASTIS benchmark with radar and optical image time series.
3 PAPERS • 2 BENCHMARKS
We recommend using the Multi Atlas Segmentation and Morphometric analysis toolkit (MASMAT) for mouse brain MRI, along with the other mouse brain atlases in this repo.
The instance segmentation task, an extension of the well-known object detection task, is of great help in many areas, such as precision agriculture, where plant organs can be identified automatically. To address early disease detection and diagnosis on vine plants, a new dataset has been created with the goal of advancing the state of the art in disease recognition via instance segmentation. Preliminary results for the object detection and instance segmentation tasks reached by the Mask R-CNN and R^3-CNN models are provided as a baseline, demonstrating that the procedure is able to reach promising results.
EgoHOS is a labeled dataset consisting of 11,243 egocentric images with per-pixel segmentation labels of hands and objects being interacted with during a diverse array of daily activities.
The CHAOS challenge aims at the segmentation of abdominal organs (liver, kidneys, and spleen) from CT and MRI data. CHAOS tasks contain combinations of these organs' segmentation: one task is based on using a single system that can segment the liver from both CT and MRI; another is mostly a regular task of liver segmentation from CT (such as SLIVER07); and others address segmentation from MRI.
8 PAPERS • NO BENCHMARKS YET
The Multi-Object Tracking and Segmentation (MOTS) benchmark consists of 21 training sequences and 29 test sequences. It is based on the KITTI Tracking Evaluation 2012 and extends the annotations to the Multi-Object Tracking and Segmentation (MOTS) task. To this end, dense pixel-wise segmentation labels were added for every object. Submitted results are evaluated using the HOTA, CLEAR MOT, and MT/PT/ML metrics, and methods are ranked by HOTA [1] (adapted for the segmentation case). Evaluation is performed using the code from the TrackEval repository. [1] J. Luiten, A. Ošep, P. Dendorfer, P. Torr, A. Geiger, L. Leal-Taixé, B. Leibe: MOTS: Multi-Object Tracking and Segmentation. CVPR 2019.
26 PAPERS • 1 BENCHMARK
This dataset was collected in order to carry out segmentation, feature extraction, and classification tasks, and to compare common segmentation, feature extraction, and classification algorithms (semantic segmentation, convolutional neural networks, bag of features). If you use this dataset in your work, please consider citing "A Large-Scale Dataset for Fish Segmentation and Classification" (Ulucan, Karakaya, et al., 2020).
3,859 high-resolution YouTube videos: 2,985 training videos, 421 validation videos, and 453 test videos. An improved 40-category label set, obtained by merging eagle and owl into bird, merging ape into monkey, deleting hands, and adding flying disc, squirrel, and whale. 8,171 unique video instances and 232k high-quality manual annotations.
44 PAPERS • 1 BENCHMARK
DOORS is a dataset designed for boulder recognition, centroid regression, segmentation, and navigation applications. The segmentation portion contains images, masks, and labels for two datasets, DS1 and DS2. DS1 is made of the same images as the regression dataset but is specifically designed for segmentation.
The goal of the challenge is to compare automated algorithms able to detect and segment various types of fluids on a common dataset of optical coherence tomography (OCT) volumes representing different retinal diseases. We invite the medical imaging community to participate by developing and testing existing and novel automated retinal OCT segmentation methods.