Test dataset for semantic segmentation. The dataset includes 500 RGB images with corresponding single-channel binary masks.
1 PAPER • NO BENCHMARKS YET
…The last task relates to automatically segmenting polyps. Please cite "The EndoTect 2020 Challenge: Evaluation and Comparison of Classification, Segmentation and Inference Time for Endoscopy" if you use the dataset.
2 PAPERS • 1 BENCHMARK
This dataset was acquired with the Airphen (Hyphen, Avignon, France) six-band multi-spectral camera configured with the 450/570/675/710/730/850 nm bands at 10 nm FWHM. It was acquired on the site of INRAe in Montoldre (Allier, France, at 46°20'30.3"N 3°26'03.6"E) within the framework of the “RoSE challenge” funded by the French National Research Agency (ANR). The images contain bean plants together with various natural weeds (yarrows, amaranth, geranium, plantago, etc.) and sown ones (mustards, goosefoots, mayweed and ryegrass), under very distinct illumination conditions (shadow, morning, evening, full sun, cloudy, rain, ...). The ground truth is defined for each image as polygons around leaf boundaries; in addition, each polygon is labeled as crop or weed. (2020-06-11)
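The ground truth described above (class-labeled polygons around leaf boundaries) can be rasterized into per-pixel masks. The sketch below is a minimal, standard-library-only illustration; the function names, the crop/weed class ids, and the toy polygons are assumptions, not part of the dataset release.

```python
# Minimal sketch: rasterize crop/weed-labeled polygons into a
# single-channel class mask (0 = background, 1 = crop, 2 = weed).
# Class ids, function names, and the toy polygons are illustrative
# assumptions, not part of the dataset release.

def point_in_polygon(x, y, poly):
    """Even-odd ray-casting test for a point against a polygon."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def polygons_to_mask(polygons, labels, size):
    """Rasterize labeled polygons into a row-major 2D class mask."""
    class_ids = {"crop": 1, "weed": 2}
    width, height = size
    mask = [[0] * width for _ in range(height)]
    for poly, label in zip(polygons, labels):
        cid = class_ids[label]
        for row in range(height):
            for col in range(width):
                # Test the pixel centre against the polygon
                if point_in_polygon(col + 0.5, row + 0.5, poly):
                    mask[row][col] = cid
    return mask

# Toy example: one square "crop" leaf and one triangular "weed" leaf
mask = polygons_to_mask(
    [[(1, 1), (4, 1), (4, 4), (1, 4)], [(6, 6), (9, 6), (9, 9)]],
    ["crop", "weed"],
    (12, 12),
)
print(mask[2][2], mask[7][7], mask[0][0])  # 1 2 0
```

In practice one would rasterize with an image library rather than per-pixel tests, but the even-odd rule above is the underlying idea.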
0 PAPERS • NO BENCHMARKS YET
The “Medico automatic polyp segmentation challenge” aims to develop computer-aided diagnosis systems for automatic polyp segmentation that detect all types of polyps (for example, irregular and smaller polyps). The main goal of the challenge is to benchmark semantic segmentation algorithms on a publicly available dataset, emphasizing robustness, speed, and generalization. Medico Multimedia Task at MediaEval 2020: Automatic Polyp Segmentation (https://arxiv.org/pdf/2012.15244.pdf)
3 PAPERS • 1 BENCHMARK
…Nevertheless, this means that an instance segmentation of all components and objects of interest into disjoint entities from the CT data is necessary. Currently, no adequate computer-assisted tools for automated or semi-automated segmentation of such XXL-airplane data are available, so, as a first step, an interactive data annotation and object-labelling process is required.
The GAS (Grasp Area Segmentation) dataset consists of 10,089 RGB images of cluttered scenes grouped into 1,121 grasp-area segmentation tasks. For each RGB image we provide a binary segmentation map with the graspable and non-graspable regions for every object in the scene. To create the GAS dataset we use the RGB images and corresponding ground-truth segmentation masks from the GraspNet 1-Billion dataset.
The Densely Annotated Video Segmentation dataset (DAVIS) is a high-quality, high-resolution, densely annotated video segmentation dataset available at two resolutions, 480p and 1080p.
633 PAPERS • 13 BENCHMARKS
The multimodal material segmentation (MCubeS) dataset contains 500 sets of images from 42 street scenes. The dataset provides annotated ground-truth labels for both material and semantic segmentation for every pixel.
10 PAPERS • 1 BENCHMARK
…The dataset consists of images of 158 filled-out bank checks containing various complex backgrounds, with handwritten text and signatures in the respective fields, along with both pixel-level and patch-level segmentation annotations. If you use the dataset, please cite “A Novel Segmentation Dataset for Signatures on Bank Checks.” arXiv:2104.12203 [cs], Apr. 2021, http://arxiv.org/abs/2104.12203.
UV6K is a high-resolution remote sensing urban vehicle segmentation dataset. Images: 6,313; vehicles: 245,141; resolution: 0.1 m; image size: 1024×1024.
1 PAPER • 1 BENCHMARK
HASCD (Human Activity Segmentation Challenge Dataset) contains 250 annotated multivariate time series capturing 10.7 h of real-world human motion smartphone sensor data from 15 bachelor computer science students.
This is the first general Underwater Image Instance Segmentation (UIIS) dataset, containing 4,628 images across 7 categories with pixel-level annotations for the underwater instance segmentation task.
To reveal and systematically investigate the effectiveness of the proposed method in the real world, a real low-light image dataset for instance segmentation is urgently needed. Since no suitable dataset exists, we collect and annotate a Low-light Instance Segmentation (LIS) dataset using a Canon EOS 5D Mark IV camera.
2 PAPERS • NO BENCHMARKS YET
We introduce a large-scale image dataset EasyPortrait for portrait segmentation and face parsing. Segmentation masks were created from polygons for each annotation.
The 2021 Kidney and Kidney Tumor Segmentation challenge (abbreviated KiTS21) is a competition in which teams compete to develop the best system for automatic semantic segmentation of renal tumors and surrounding anatomy. Related publications: “The 2021 Kidney and Kidney Tumor Segmentation Challenge” and “The state of the art in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: Results of the KiTS19 Challenge.”
7 PAPERS • 1 BENCHMARK
…In moving object segmentation of point cloud sequences, one has to provide motion labels for each point of test sequences 11-21. We map all moving-x classes of the original SemanticKITTI semantic segmentation benchmark to a single moving-object class. More information on the task and the metric can be found in our publication related to the task: @article{chen2021ral, title={Moving Object Segmentation in 3D LiDAR Data: A Learning-based Approach}}
6 PAPERS • NO BENCHMARKS YET
Extension of the official KITTI'15 dataset with independently moving instance segmentation ground truth covering all moving objects, not just a selection of cars and vans. The dataset contains: instance motion segmentation of all moving objects; binary motion segmentation (background/foreground); validation masks; and instance motion segmentation for the training split of KITTI.
This project aims to provide all the materials to the community to resolve the problem of echocardiographic image segmentation and volume estimation from 2D ultrasound sequences (both two- and four-chamber views). This platform aims to assess, in a reproducible manner, the performance of methods for segmenting cardiac structures (left ventricle endocardium and epicardium, and left atrium borders) and extracting clinical indices.
54 PAPERS • NO BENCHMARKS YET
This dataset includes multi-spectral acquisitions of vegetation for the conception of new DeepIndices. The images were acquired with the Airphen (Hyphen, Avignon, France) six-band multi-spectral camera configured with the 450/570/675/710/730/850 nm bands at 10 nm FWHM. The dataset was acquired on the site of INRAe in Montoldre (Allier, France, at 46°20'30.3"N 3°26'03.6"E) within the framework of the “RoSE challenge” funded by the French National Research Agency (ANR), and in Dijon (Burgundy, France, at 47°18'32.5"N 5°04'01.8"E) on the site of AgroSup Dijon. Images of bean and corn, containing various natural weeds (yarrows, amaranth, geranium, plantago, etc.) and sown ones (mustards, goosefoots, mayweed and ryegrass) with very distinct illumination characteristics (shadow, morning, evening, full sun, cloudy, rain, ...), were acquired in top-down view at 1.8 meters from the ground. (2020-05-01)
…We recommend using the Multi-Atlas Segmentation and Morphometric Analysis Toolkit (MASMAT) for mouse brain MRI, along with the other mouse brain atlases in this repo.
Extension of the PASTIS benchmark with radar and optical image time series.
3 PAPERS • 2 BENCHMARKS
The CHAOS challenge targets the segmentation of abdominal organs (liver, kidneys and spleen) from CT and MRI data. The CHAOS tasks cover combinations of these organs' segmentation: one task is based on using a single system that can segment the liver from both CT and MRI; another is mostly a regular task of liver segmentation from CT (such as SLIVER07); the remaining tasks address segmentation from MRI.
8 PAPERS • NO BENCHMARKS YET
The Multi-Object Tracking and Segmentation (MOTS) benchmark [2] consists of 21 training sequences and 29 test sequences. It is based on the KITTI Tracking Evaluation 2012 and extends the annotations to the Multi-Object Tracking and Segmentation (MOTS) task. To this end, we added dense pixel-wise segmentation labels for every object. We evaluate submitted results using the metrics HOTA, CLEAR MOT, and MT/PT/ML, and rank methods by HOTA [1] (adapted for the segmentation case). Evaluation is performed using the code from the TrackEval repository. [1] J. Luiten, A. Os̆ep, P. Dendorfer, P. Torr, A. Geiger, L. Leal-Taixé, B. Leibe: MOTS: Multi-Object Tracking and Segmentation. CVPR 2019.
26 PAPERS • 1 BENCHMARK
…This dataset was collected in order to carry out segmentation, feature extraction, and classification tasks and to compare common segmentation, feature extraction, and classification algorithms (semantic segmentation, convolutional neural networks, bag of features). If you use this dataset in your work, please consider citing: @inproceedings{ulucan2020large, title={A Large-Scale Dataset for Fish Segmentation and Classification}, author={Ulucan, Oguzhan and Karakaya, ...}}
FractureAtlas is a musculoskeletal bone fracture dataset with annotations for deep learning tasks like classification, localization, and segmentation.
The DISRPT 2019 workshop introduces the first iteration of a cross-formalism shared task on discourse unit segmentation. Since all major discourse parsing frameworks imply a segmentation of texts into segments, learning segmentations for and from diverse resources is a promising area for converging methods and insights. Because different corpora, languages and frameworks use different guidelines for segmentation, the shared task is meant to promote the design of flexible methods for dealing with various guidelines.
4 PAPERS • NO BENCHMARKS YET
LoveDA contains 5,987 high spatial resolution (0.3 m) remote sensing images from Nanjing, Changzhou, and Wuhan. It focuses on the different geographical environments of urban and rural areas and advances both semantic segmentation and domain adaptation tasks. The dataset poses three considerable challenges: multi-scale objects, complex background samples, and inconsistent class distributions. Two contests are held on CodaLab, including LoveDA Semantic Segmentation.
45 PAPERS • 1 BENCHMARK
MatSeg is a dataset for zero-shot material state segmentation: it contains large-scale synthetic images for training and highly diverse real-world image benchmarks for testing. It focuses on zero-shot, class-agnostic segmentation of materials and their states, i.e. finding the regions of material states without pre-training on the specific material classes or states. It contains both hard segmentation maps and soft, partial similarity annotations for similar but not identical materials.
…A precise three-dimensional spatial description, i.e. segmentation, of the target volumes as well as OARs is required for optimal radiation dose distribution calculation. Although attempts have been made towards the segmentation of OARs from MR images, so far there has been no evaluation of the impact the combined analysis of CT and MR images has on OAR segmentation. The Head and Neck Organ-at-Risk Multi-Modal Segmentation Challenge aims to promote the development of new, and the application of existing, fully automated techniques for OAR segmentation in the HaN region from CT images that exploit the information of multiple imaging modalities so as to improve the accuracy of segmentation results.
5 PAPERS • NO BENCHMARKS YET
UVO is a new benchmark for open-world class-agnostic object segmentation in videos.
23 PAPERS • 3 BENCHMARKS
The increasing use of deep learning techniques has reduced interpretation time and, ideally, reduced interpreter bias by automatically deriving geological maps from digital outcrop models. However, accurate validation of these automated mapping approaches is a significant challenge due to the subjective nature of geological mapping and the difficulty in collecting quantitative validation data. Additionally, many state-of-the-art deep learning methods are limited to 2D image data, which is insufficient for 3D digital outcrops, such as hyperclouds. To address these challenges, we present Tinto, a multi-sensor benchmark digital outcrop dataset designed to facilitate the development and validation of deep learning approaches for geological mapping, especially for non-structured 3D data like point clouds. Tinto comprises two complementary sets: 1) a real digital outcrop model from Corta Atalaya (Spain), with spectral attributes and ground-truth data, and 2) a synthetic twin that uses latent features.
The DISRPT 2021 shared task, co-located with CODI 2021 at EMNLP, introduces the second iteration of a cross-formalism shared task on discourse unit segmentation and connective detection.
3 PAPERS • NO BENCHMARKS YET
The challenge of accurately segmenting individual trees from laser scanning data hinders the assessment of crucial tree parameters necessary for effective forest management, impacting many downstream applications. While dense laser scanning offers detailed 3D representations, automating the segmentation of trees and their structures from point clouds remains difficult. Addressing these gaps, the FOR-instance data represent a novel benchmarking dataset to enhance forest measurement using dense airborne laser scanning data, aiding researchers in advancing segmentation methods. In this repository, users will find forest laser scanning point clouds from unmanned aerial vehicles (using Riegl sensors) that are manually segmented into individual trees (1,130 trees).
…ACCT is a fast and accessible automatic cell counting tool using machine learning for 2D image segmentation.
Human fibrosarcoma HT1080WT (ATCC) cells at low cell densities embedded in 3D collagen type I matrices [1]. The time-lapse videos were recorded every 2 minutes for 16.7 hours and covered a field of view of 1002 pixels × 1004 pixels with a pixel size of 0.802 μm/pixel. The videos were pre-processed to correct frame-to-frame drift artifacts, resulting in a final size of 983 pixels × 985 pixels.
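As a quick sanity check on the numbers above, the physical field of view follows directly from the pixel counts and the stated 0.802 μm/pixel size; the helper below is purely illustrative.

```python
# Illustrative arithmetic only: converting frame sizes in pixels to a
# field of view in micrometres, using the stated 0.802 um/pixel size.
PIXEL_SIZE_UM = 0.802

def fov_um(width_px, height_px, pixel_size=PIXEL_SIZE_UM):
    """Return the field of view (width, height) in micrometres."""
    return width_px * pixel_size, height_px * pixel_size

print(fov_um(1002, 1004))  # raw frames: ~803.6 x 805.2 um
print(fov_um(983, 985))    # drift-corrected frames: ~788.4 x 790.0 um
```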
…In the OCTScenes-A dataset, scenes 0–3099 (without segmentation annotation) are for training, while scenes 3100–3199 (with segmentation annotation) can be used for testing. In the OCTScenes-B dataset, scenes 0–4899 (without segmentation annotation) are for training, while scenes 4900–4999 (with segmentation annotation) can be used for testing.
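The index ranges above translate directly into train/test splits. The following sketch is illustrative only; the helper name and return format are assumptions, not an official API of the dataset.

```python
# Sketch of the described OCTScenes index ranges as train/test splits.
# The helper name and return format are assumptions, not an official API.
SPLITS = {
    "A": (range(0, 3100), range(3100, 3200)),  # train (no masks), test (masks)
    "B": (range(0, 4900), range(4900, 5000)),
}

def octscenes_split(variant):
    """Return (train_ids, test_ids) scene-index lists for a variant."""
    train, test = SPLITS[variant]
    return list(train), list(test)

train_a, test_a = octscenes_split("A")
print(len(train_a), len(test_a))  # 3100 100
```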
Standardized Multi-Channel Dataset for Glaucoma (SMDG-19) is a collection and standardization of 19 public datasets, comprising full-fundus glaucoma images, associated image metadata such as optic disc segmentation, optic cup segmentation, and blood vessel segmentation, and any provided per-instance text metadata such as sex and age.
…It annotates inter-segment relations based on COCO panoptic segmentation.
19 PAPERS • 1 BENCHMARK
The Segmenting and Tracking Every Pixel (STEP) benchmark consists of 21 training sequences and 29 test sequences. It is based on the KITTI Tracking Evaluation and the Multi-Object Tracking and Segmentation (MOTS) benchmark. This benchmark extends the annotations to the Segmenting and Tracking Every Pixel (STEP) task. [Copy-pasted from http://www.cvlibs.net/datasets/kitti/eval_step.php]
20 PAPERS • 2 BENCHMARKS
An instance segmentation dataset of yeast cells in microstructures. The dataset includes 493 densely annotated microscopy images. For more information see the paper "An Instance Segmentation Dataset of Yeast Cells in Microstructures".
We present a large and diverse abdominal CT organ segmentation dataset, termed AbdomenCT-1K, with more than 1000 (1K) CT scans from 12 medical centers, including multi-phase, multi-vendor, and multi-disease cases. Furthermore, we conduct a large-scale study of liver, kidney, spleen, and pancreas segmentation and reveal the unsolved segmentation problems of the SOTA methods, such as their limited generalization ability. To advance the unsolved problems, we further build four organ segmentation benchmarks for fully supervised, semi-supervised, weakly supervised, and continual learning, which are currently challenging. We believe the AbdomenCT-1K dataset will promote future in-depth research towards clinically applicable abdominal organ segmentation methods.
23 PAPERS • NO BENCHMARKS YET
…The images are annotated by segmentation masks of the object(s) of interest. The original purpose of the data collection is for gesture-aware object-agnostic segmentation tasks.
The MSP-Podcast corpus contains speech segments from podcast recordings which are perceptually annotated using crowdsourcing. The collection of this corpus is an ongoing process. Most of the segments in a regular podcast are neutral. We use machine learning techniques trained with available data to retrieve candidate segments, which are then emotionally annotated with crowdsourcing. This approach allows us to spend our resources on speech segments that are likely to convey emotions.
3 PAPERS • 4 BENCHMARKS
Accurate lesion segmentation is critical in stroke rehabilitation research for the quantification of lesion burden and accurate image processing. Current automated lesion segmentation methods for T1-weighted (T1w) MRIs, commonly used in rehabilitation research, lack accuracy and reliability. Manual segmentation remains the gold standard, but it is time-consuming, subjective, and requires significant neuroanatomical expertise. Here we present ATLAS v2.0 (N=1271), a larger dataset of T1w stroke MRIs and manually segmented lesion masks that includes training (public, n=655), test (masks hidden, n=300), and generalizability (completely hidden) datasets. Algorithm development using this larger sample should lead to more robust solutions, and the hidden test and generalizability datasets allow for unbiased performance evaluation via segmentation challenges.
6 PAPERS • 1 BENCHMARK
The largest real-world night-time semantic segmentation dataset with pixel-level labels.
9 PAPERS • NO BENCHMARKS YET
This is a dataset for segmentation and classification of epistemic activities in diagnostic reasoning texts.
Video object segmentation has been studied extensively in the past decade due to its importance in understanding video spatial-temporal structures as well as its value in industrial applications. Previously, we presented the first large-scale video object segmentation dataset, named YouTubeVOS, and hosted the Large-scale Video Object Segmentation Challenge in conjunction with ECCV 2018 and ICCV 2019. This year, we are thrilled to invite you to the 4th Large-scale Video Object Segmentation Challenge in conjunction with CVPR 2022.
5 PAPERS • 1 BENCHMARK