Berkeley Segmentation Data Set 500 (BSDS500) is a standard benchmark for contour detection.
241 PAPERS • 8 BENCHMARKS
…In moving object segmentation of point cloud sequences, one has to provide motion labels for each point of the test sequences 11-21. We map all moving-x classes of the original SemanticKITTI semantic segmentation benchmark to a single moving-object class. Citation. More information on the task and the metric can be found in our publication related to the task: @article{chen2021ral, title={{Moving Object Segmentation in 3D LiDAR Data: A Learning-based Approach
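The moving-x collapse described above amounts to a simple per-point label remap. A minimal sketch, assuming the label IDs shown below (illustrative, not guaranteed to match the actual SemanticKITTI ID assignment):

```python
import numpy as np

# Assumed IDs for the moving-x classes; illustrative, not the
# authoritative SemanticKITTI label map.
MOVING_SOURCE_IDS = [252, 253, 254, 255, 256, 257, 258, 259]
MOVING_CLASS = 1   # single merged moving-object class
STATIC_CLASS = 0   # everything else

def to_moving_labels(labels: np.ndarray) -> np.ndarray:
    """Map per-point semantic labels to a binary moving/static label."""
    moving = np.isin(labels, MOVING_SOURCE_IDS)
    return np.where(moving, MOVING_CLASS, STATIC_CLASS)

# e.g. car (10), moving-car (252), road (30)... -> static/moving
labels = np.array([10, 252, 30, 254])
print(to_moving_labels(labels))  # [0 1 0 1]
```

The same vectorized lookup applies to a full scan of millions of points; only the ID table changes per dataset.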
6 PAPERS • NO BENCHMARKS YET
The increasing use of deep learning techniques has reduced interpretation time and, ideally, reduced interpreter bias by automatically deriving geological maps from digital outcrop models. However, accurate validation of these automated mapping approaches is a significant challenge due to the subjective nature of geological mapping and the difficulty in collecting quantitative validation data. Additionally, many state-of-the-art deep learning methods are limited to 2D image data, which is insufficient for 3D digital outcrops, such as hyperclouds. To address these challenges, we present Tinto, a multi-sensor benchmark digital outcrop dataset designed to facilitate the development and validation of deep learning approaches for geological mapping, especially for non-structured 3D data like point clouds. Tinto comprises two complementary sets: 1) a real digital outcrop model from Corta Atalaya (Spain), with spectral attributes and ground-truth data, and 2) a synthetic twin that uses latent
1 PAPER • NO BENCHMARKS YET
We present the Dayton Annotated LiDAR Earth Scan (DALES) data set, a new large-scale aerial LiDAR data set with over a half-billion hand-labeled points spanning 10 square kilometers of area and eight object categories. Large annotated point cloud data sets have become the standard for evaluating deep learning methods. However, most of the existing data sets focus on data collected from a mobile or terrestrial scanner with few focusing on aerial data. Point cloud data collected from an Aerial Laser Scanner (ALS) presents a new set of challenges and applications in areas such as 3D urban modeling and large-scale surveillance. DALES is the most extensive publicly available ALS data set with over 400 times the number of points and six times the resolution of other currently available annotated aerial point cloud data sets. This data set gives a critical number of expert verified hand-labeled points for the evaluation of new 3D deep learning algorithms, helping to expand the focus of curren
24 PAPERS • 2 BENCHMARKS
…It can be applied in multiple tasks, such as object detection, instance segmentation, semantic segmentation, free-space segmentation, and waterline segmentation.
8 PAPERS • 2 BENCHMARKS
…for each object:
- 600 12-megapixel images, sampling the viewing hemisphere
- 600 registered RGB-D point clouds from a Carmine 1.09 sensor
- Pose information for each of the above images and point clouds
- Segmentation masks for each of the above images (and segmented point clouds)
- Merged point clouds consisting of data from all 600 viewpoints
- Reconstructed meshes from the merged point clouds
Paper: ICRA 2014 "A Large-Scale
The SemanticPOSS dataset for 3D semantic segmentation contains 2,988 varied and complex LiDAR scans with a large quantity of dynamic instances.
58 PAPERS • 2 BENCHMARKS
…Each RGB image has a corresponding depth and segmentation map. As many as 700 object categories are labeled. The training and testing sets contain 5285 and 5050 images, respectively.
426 PAPERS • 13 BENCHMARKS
Toronto-3D is a large-scale urban outdoor point cloud dataset acquired by an MLS system in Toronto, Canada for semantic segmentation.
21 PAPERS • 1 BENCHMARK
…Each frame has a semantic segmentation of the objects in the scene and information about the camera pose. It comprises 415 sequences captured in 254 different spaces across 41 different buildings.
114 PAPERS • NO BENCHMARKS YET
…By collecting data in simulations, multi-modal sensor data and precise ground-truth labels are obtainable, such as RGB images, depth images, semantic segmentation, change segmentation, camera poses, and
4 PAPERS • 2 BENCHMARKS
…Leaf/wood labels were transferred from a contemporaneous (2021) TLS acquisition, for which segmentation was done using LeWoS followed by on-screen manual correction.
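Transferring labels from one acquisition to another is typically done by nearest-neighbor lookup between the two clouds. A minimal sketch under that assumption (the function and variable names are hypothetical; the dataset's actual pipeline used LeWoS plus manual correction):

```python
import numpy as np

def transfer_labels(src_pts: np.ndarray, src_labels: np.ndarray,
                    dst_pts: np.ndarray) -> np.ndarray:
    """Assign each destination point the label of its nearest source point.

    Brute-force O(N*M) distance computation for clarity; at real point-cloud
    scale a KD-tree (e.g. scipy.spatial.cKDTree) would be used instead.
    """
    d2 = ((dst_pts[:, None, :] - src_pts[None, :, :]) ** 2).sum(axis=-1)
    return src_labels[d2.argmin(axis=1)]

# Toy example: two labeled source points (0 = wood, 1 = leaf)
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
lab = np.array([0, 1])
dst = np.array([[0.1, 0.0, 0.0], [0.9, 0.0, 0.0]])
print(transfer_labels(src, lab, dst))  # [0 1]
```

A distance threshold is often added so that destination points far from any labeled source point stay unlabeled rather than inheriting a spurious class.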
1 PAPER • 1 BENCHMARK
…This outdoor dataset introduces falling_snow and accumulated_snow classes alongside all the SemanticKITTI classes to further AV tasks like semantic and panoptic segmentation, object detection and tracking, and
🤖 Robo3D - The SemanticKITTI-C Benchmark. SemanticKITTI-C is an evaluation benchmark targeting robust and reliable 3D semantic segmentation in autonomous driving.
22 PAPERS • 1 BENCHMARK
Our project (STPLS3D) aims to provide a large-scale aerial photogrammetry dataset with synthetic and real annotated 3D point clouds for semantic and instance segmentation tasks.
32 PAPERS • 3 BENCHMARKS