The 2021 Kidney and Kidney Tumor Segmentation Challenge (KiTS21) is a competition in which teams compete to develop the best system for automatic semantic segmentation of renal tumors and surrounding anatomy. It follows the earlier KiTS19 challenge, whose results are reported in "The state of the art in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: Results of the KiTS19 Challenge".
7 PAPERS • 1 BENCHMARK
In moving object segmentation of point cloud sequences, one has to provide motion labels for each point of the test sequences 11-21. We map all moving-X classes of the original SemanticKITTI semantic segmentation benchmark to a single moving-object class. More information on the task and the metric can be found in our publication related to the task: @article{chen2021ral, title={Moving Object Segmentation in 3D LiDAR Data: A Learning-based Approach Exploiting Sequential Data}, author={Chen, Xieyuanli and Li, Shijie and Mersch, Benedikt and Wiesmann, Louis and Gall, J{\"u}rgen and Behley, Jens and Stachniss, Cyrill}, journal={IEEE Robotics and Automation Letters (RA-L)}, year={2021}}
6 PAPERS • NO BENCHMARKS YET
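The class remapping described above can be sketched in a few lines. This is a minimal illustration, assuming the standard SemanticKITTI `.label` file layout (one `uint32` per point, semantic class in the lower 16 bits) and the convention that the moving-X classes occupy IDs 252 and above; check the benchmark's own configuration files before relying on these values.

```python
import numpy as np

# SemanticKITTI convention (assumption): moving-X class IDs start at 252.
MOVING_CLASS_START = 252

def to_moving_mask(raw_labels):
    """Collapse per-point SemanticKITTI labels into a binary moving/static mask.

    raw_labels: uint32 array; the lower 16 bits hold the semantic class id,
    the upper 16 bits hold the instance id (ignored here).
    """
    semantic = raw_labels & 0xFFFF          # strip the instance id
    return semantic >= MOVING_CLASS_START   # True for any moving-X class

def load_moving_mask(path):
    """Read a .label file from disk and return its moving/static mask."""
    return to_moving_mask(np.fromfile(path, dtype=np.uint32))
```

The binary mask is what the benchmark scores: all moving-X classes collapse to one positive class, everything else to the static class.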
We present the Dayton Annotated LiDAR Earth Scan (DALES) data set, a new large-scale aerial LiDAR data set with over a half-billion hand-labeled points spanning 10 square kilometers of area and eight object categories. Large annotated point cloud data sets have become the standard for evaluating deep learning methods. However, most of the existing data sets focus on data collected from a mobile or terrestrial scanner, with few focusing on aerial data. Point cloud data collected from an Aerial Laser Scanner (ALS) presents a new set of challenges and applications in areas such as 3D urban modeling and large-scale surveillance. DALES is the most extensive publicly available ALS data set, with over 400 times the number of points and six times the resolution of other currently available annotated aerial point cloud data sets. This data set gives a critical number of expert-verified hand-labeled points for the evaluation of new 3D deep learning algorithms, helping to expand the focus of current algorithms to aerial data.
23 PAPERS • 2 BENCHMARKS
The challenge of accurately segmenting individual trees from laser scanning data hinders the assessment of crucial tree parameters necessary for effective forest management, impacting many downstream applications. While dense laser scanning offers detailed 3D representations, automating the segmentation of trees and their structures from point clouds remains difficult. Addressing these gaps, the FOR-instance data represent a novel benchmarking dataset to enhance forest measurement using dense airborne laser scanning data, aiding researchers in advancing segmentation methods. In this repository, users will find forest laser scanning point clouds from unmanned aerial vehicles (using Riegl sensors) that are manually segmented into individual trees (1130 trees).
3 PAPERS • NO BENCHMARKS YET
The ScanNet200 benchmark studies 200-class 3D semantic segmentation, an order of magnitude more class categories than previous 3D scene understanding benchmarks. The source of scene data is identical to ScanNet, but it parses a larger vocabulary for semantic and instance segmentation.
21 PAPERS • 3 BENCHMARKS
ScribbleKITTI is a scribble-annotated dataset for LiDAR semantic segmentation.
13 PAPERS • 2 BENCHMARKS
Despite the considerable progress in automatic abdominal multi-organ segmentation from CT/MRI scans in recent years, a comprehensive evaluation of the models' capabilities is hampered by the lack of a large-scale, diverse benchmark. To mitigate these limitations, we present AMOS, a large-scale, diverse, clinical dataset for abdominal organ segmentation, collected from multi-center, multi-vendor, multi-modality, multi-phase, multi-disease patients, each scan with voxel-level annotations of 15 abdominal organs, providing challenging examples and a test-bed for studying robust segmentation algorithms. We further benchmark several state-of-the-art medical segmentation models to evaluate the status of the existing methods on this new challenging dataset.
50 PAPERS • 1 BENCHMARK
Usage: 2D/3D image segmentation. Format: HDF5. Libraries to read HDF5 files: (1) silx: https://github.com/silx-kit/silx; (2) h5py: https://www.h5py.org; (3) pymicro: https://github.com/heprom/pymicro. Trained models to segment this dataset: https://doi.org/10.5281/zenodo.4601560. Please cite us as @ARTICLE{10.3389/fmats.2021.761229, AUTHOR={Bertoldo, João P. C. and Decencière, Etienne and Ryckelynck, David and Proudhon, Henry}, TITLE={A Modular U-Net for Automated Segmentation of X-Ray Tomography Images in Composite Materials}, JOURNAL={Frontiers in Materials}, YEAR={2021}}
1 PAPER • 1 BENCHMARK
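Since the dataset ships as HDF5, loading a volume with h5py (one of the libraries listed above) takes only a few lines. A minimal sketch follows; `"volume"` is a placeholder dataset key, not necessarily the actual key used in this dataset's files, so inspect the layout with `f.keys()` first.

```python
import h5py
import numpy as np

def load_volume(path, dataset_name):
    """Read a 3D volume from an HDF5 file into a NumPy array.

    The internal dataset name varies per file; list f.keys() or call
    f.visit(print) to discover the layout before hard-coding a key.
    """
    with h5py.File(path, "r") as f:
        return f[dataset_name][()]  # [()] materializes the full dataset
```

For volumes too large for memory, h5py also supports lazy slicing directly on the open file, e.g. `f[dataset_name][z0:z1]`, which reads only the requested chunk from disk.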
Swiss3DCities is a dataset that is manually annotated for semantic segmentation with per-point labels, and is built using photogrammetry from images acquired by multirotors equipped with high-resolution cameras.
4 PAPERS • NO BENCHMARKS YET
The SemanticPOSS dataset for 3D semantic segmentation contains 2988 varied and complex LiDAR scans with a large number of dynamic instances.
56 PAPERS • 2 BENCHMARKS
SemanticKITTI is a large-scale outdoor-scene dataset for point cloud semantic segmentation.
528 PAPERS • 10 BENCHMARKS
The platelet-em dataset contains two 3D scanning electron microscope (EM) images of human platelets, as well as instance and semantic segmentations of those two image volumes.
2 PAPERS • 2 BENCHMARKS
🤖 Robo3D - The SemanticKITTI-C Benchmark
SemanticKITTI-C is an evaluation benchmark heading toward robust and reliable 3D semantic segmentation in autonomous driving.
20 PAPERS • 1 BENCHMARK
Our project (STPLS3D) aims to provide a large-scale aerial photogrammetry dataset with synthetic and real annotated 3D point clouds for semantic and instance segmentation tasks.
32 PAPERS • 3 BENCHMARKS