LIDAR Semantic Segmentation
53 papers with code • 4 benchmarks • 7 datasets
Libraries
Use these libraries to find LIDAR Semantic Segmentation models and implementations.
Datasets
Latest papers
Optimizing LiDAR Placements for Robust Driving Perception in Adverse Conditions
The robustness of driving perception systems under unprecedented conditions is crucial for safety-critical applications.
OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation
This exploration led to the creation of Omni-Adaptive 3D CNNs (OA-CNNs), a family of networks that integrates a lightweight module to greatly enhance the adaptivity of sparse CNNs at minimal computational cost.
Reflectivity Is All You Need!: Advancing LiDAR Semantic Segmentation
Additionally, we investigate the possible benefits of using calibrated intensity for semantic segmentation in urban environments (SemanticKITTI) and for cross-sensor domain adaptation.
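Calibrated intensity (reflectivity) corrects raw LiDAR return strength for range-dependent signal decay. A minimal first-order sketch of such a correction, assuming a simple inverse-square range model (the paper's actual calibration may differ; `ref_range` is an illustrative parameter, not from the source):

```python
import numpy as np

def calibrate_intensity(intensity, rng, ref_range=10.0):
    """Range-normalize raw LiDAR intensity toward a reflectivity-like value.

    Raw return intensity decays roughly with squared range, so scaling by
    (range / ref_range)^2 is one simple first-order correction. This is an
    illustrative assumption, not the authors' exact calibration model.
    """
    intensity = np.asarray(intensity, dtype=float)
    rng = np.asarray(rng, dtype=float)
    return intensity * (rng / ref_range) ** 2
```

With this correction, two returns from the same surface at 10 m and 20 m map to comparable values, which is what makes the feature usable across scenes and sensors.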
Off-Road LiDAR Intensity Based Semantic Segmentation
LiDAR is used in autonomous driving to provide 3D spatial information and enable accurate perception in off-road environments, aiding in obstacle detection, mapping, and path planning.
Point Transformer V3: Simpler, Faster, Stronger
This paper is not motivated to seek innovation within the attention mechanism.
FRNet: Frustum-Range Networks for Scalable LiDAR Segmentation
First, a frustum feature encoder module extracts per-point features within each frustum region, which preserves scene consistency and is crucial for point-level predictions.
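The frustum grouping above can be sketched as follows: each point is assigned to one (row, column) cell of a spherical range-image projection, and per-point features are pooled within each cell. The bin counts, sensor field of view, and max-pooling choice are illustrative assumptions, not FRNet's exact configuration:

```python
import numpy as np

def frustum_ids(points, h_bins=64, w_bins=512):
    """Assign each LiDAR point (x, y, z) to a frustum cell.

    A frustum here is the set of points falling into one (row, col) cell
    of a spherical projection; bin counts and the assumed vertical FOV
    (+3 to -25 degrees, typical for a 64-beam sensor) are illustrative.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points[:, :3], axis=1)
    yaw = np.arctan2(y, x)                      # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(depth, 1e-8), -1.0, 1.0))
    fov_up, fov_down = np.radians(3.0), np.radians(-25.0)
    col = ((yaw / np.pi + 1.0) * 0.5 * w_bins).astype(int) % w_bins
    row = (fov_up - pitch) / (fov_up - fov_down) * h_bins
    row = np.clip(row.astype(int), 0, h_bins - 1)
    return row * w_bins + col

def frustum_pool(feats, ids):
    """Max-pool point features within each frustum and broadcast the
    pooled context feature back to every point in that frustum."""
    order = np.argsort(ids)
    ids_s, feats_s = ids[order], feats[order]
    starts = np.r_[0, np.flatnonzero(np.diff(ids_s)) + 1]
    pooled = np.maximum.reduceat(feats_s, starts)
    counts = np.diff(np.r_[starts, len(ids_s)])
    per_point = np.repeat(pooled, counts, axis=0)
    out = np.empty_like(per_point)
    out[order] = per_point
    return out
```

Pooling within frustums gives each point access to context from its whole vertical viewing cone while keeping per-point features for point-level prediction.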
PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm
In this paper, we introduce a novel universal 3D pre-training framework designed to facilitate the acquisition of efficient 3D representation, thereby establishing a pathway to 3D foundational models.
SPOT: Scalable 3D Pre-training via Occupancy Prediction for Autonomous Driving
Our contributions are threefold: (1) Occupancy prediction is shown to be promising for learning general representations, which is demonstrated by extensive experiments on plenty of datasets and tasks.
UniSeg: A Unified Multi-Modal LiDAR Segmentation Network and the OpenPCSeg Codebase
Besides, we construct the OpenPCSeg codebase, which is the largest and most comprehensive outdoor LiDAR segmentation codebase.
HuBo-VLM: Unified Vision-Language Model designed for HUman roBOt interaction tasks
Human-robot interaction is an exciting task that aims to guide robots to follow instructions from humans.