LIDAR Semantic Segmentation
53 papers with code • 4 benchmarks • 7 datasets
Most implemented papers
RangeNet++: Fast and Accurate LiDAR Semantic Segmentation
Perception in autonomous vehicles is often carried out through a suite of different sensing modalities.
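RangeNet++ belongs to the family of range-view methods, which rasterize a LiDAR scan into a 2D range image via spherical projection and then apply ordinary 2D CNNs. A minimal sketch of such a projection is below; the image size and vertical field-of-view values are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def range_projection(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) point cloud onto an H x W range image using the
    spherical projection common to range-view methods (sizes are assumed)."""
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    fov = fov_up - fov_down
    depth = np.linalg.norm(points[:, :3], axis=1)
    yaw = -np.arctan2(points[:, 1], points[:, 0])
    pitch = np.arcsin(points[:, 2] / np.maximum(depth, 1e-8))
    u = 0.5 * (yaw / np.pi + 1.0) * w           # column from azimuth
    v = (1.0 - (pitch - fov_down) / fov) * h    # row from elevation
    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)
    img = np.full((h, w), -1.0, dtype=np.float32)
    # Later points overwrite earlier ones; real implementations
    # sort by depth so the closest point wins each pixel.
    img[v, u] = depth
    return img
```

Predictions made on the range image are mapped back to the 3D points through the same (u, v) indices.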
Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR Segmentation
However, we found that in the outdoor point cloud, the improvement obtained in this way is quite limited.
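The core idea named in the title is to partition the scene in cylindrical rather than Cartesian coordinates, so that voxels stay balanced as point density falls off with distance. A toy voxel-assignment sketch follows; the grid resolution and range bounds are assumptions for illustration.

```python
import numpy as np

def cylindrical_voxelize(points, grid=(48, 36, 4),
                         rho_max=50.0, z_min=-4.0, z_max=2.0):
    """Assign each (x, y, z) point to a cylindrical voxel index
    (radius, azimuth, height) -- the partition used by
    cylindrical-convolution methods. Grid and bounds are assumed."""
    rho = np.linalg.norm(points[:, :2], axis=1)
    phi = np.arctan2(points[:, 1], points[:, 0])  # in [-pi, pi)
    r_idx = np.clip((rho / rho_max * grid[0]).astype(int), 0, grid[0] - 1)
    p_idx = np.clip(((phi + np.pi) / (2 * np.pi) * grid[1]).astype(int),
                    0, grid[1] - 1)
    z_idx = np.clip(((points[:, 2] - z_min) / (z_max - z_min)
                     * grid[2]).astype(int), 0, grid[2] - 1)
    return np.stack([r_idx, p_idx, z_idx], axis=1)
```

Sparse 3D convolutions then operate on the occupied cylindrical voxels rather than a dense Cartesian grid.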
LaserMix for Semi-Supervised LiDAR Semantic Segmentation
Densely annotating LiDAR point clouds is costly, which restrains the scalability of fully-supervised learning methods.
CoSMix: Compositional Semantic Mix for Domain Adaptation in 3D LiDAR Segmentation
We propose Compositional Semantic Mix (CoSMix), a new sample-mixing approach for point cloud UDA and the first UDA method for point cloud segmentation based on sample mixing.
CENet: Toward Concise and Efficient LiDAR Semantic Segmentation for Autonomous Driving
Accurate and fast scene understanding is one of the challenging tasks in autonomous driving, and requires taking full advantage of LiDAR point clouds for semantic segmentation.
PolarMix: A General Data Augmentation Technique for LiDAR Point Clouds
PolarMix enriches LiDAR data with two cross-scan augmentations. The first is scene-level swapping, which exchanges point cloud sectors of two LiDAR scans that are cut along the azimuth axis.
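The scene-level swapping operation can be sketched in a few lines of NumPy; the function name and the (N, 4) x/y/z/intensity layout are my own choices, and a real implementation would swap the per-point labels alongside the points.

```python
import numpy as np

def polar_sector_swap(scan_a, scan_b, start, end):
    """Swap the azimuth sector [start, end) in radians between two
    LiDAR scans given as (N, 4) arrays of x, y, z, intensity.
    A minimal sketch of PolarMix-style scene-level swapping."""
    def in_sector(pts):
        az = np.arctan2(pts[:, 1], pts[:, 0])  # azimuth in [-pi, pi)
        return (az >= start) & (az < end)
    mask_a, mask_b = in_sector(scan_a), in_sector(scan_b)
    # Keep each scan's points outside the sector and paste in the
    # other scan's points from inside the sector.
    new_a = np.concatenate([scan_a[~mask_a], scan_b[mask_b]])
    new_b = np.concatenate([scan_b[~mask_b], scan_a[mask_a]])
    return new_a, new_b
```

Because both scans come from the same sensor geometry, the swapped sector remains a plausible LiDAR return pattern.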
Point Transformer V2: Grouped Vector Attention and Partition-based Pooling
In this work, we analyze the limitations of the Point Transformer and propose our powerful and efficient Point Transformer V2 model with novel designs that overcome the limitations of previous work.
Spherical Transformer for LiDAR-based 3D Recognition
In this work, we study the varying-sparsity distribution of LiDAR points and present SphereFormer to directly aggregate information from dense close points to the sparse distant ones.
FRNet: Frustum-Range Networks for Scalable LiDAR Segmentation
Firstly, a frustum feature encoder module is used to extract per-point features within the frustum region, which preserves scene consistency and is crucial for point-level predictions.
ConvPoint: Continuous Convolutions for Point Cloud Processing
Point clouds are unstructured and unordered data, as opposed to images.
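Because the input is unordered, a continuous convolution cannot index a fixed grid; instead it weights each neighbor's features by a function of its relative position to a set of kernel points. The sketch below is a toy stand-in: it uses a Gaussian correlation where ConvPoint learns the weighting function, and all names and shapes are assumptions.

```python
import numpy as np

def continuous_conv(points, feats, kernel_pts, kernel_w, sigma=0.5):
    """Toy continuous convolution over an unordered point set.
    points: (N, 3), feats: (N, C_in), kernel_pts: (K, 3),
    kernel_w: (K, C_in, C_out). Each output point mixes all input
    features, weighted by a Gaussian correlation between relative
    offsets and kernel points (a stand-in for a learned weighting)."""
    n = points.shape[0]
    k, _, c_out = kernel_w.shape
    out = np.zeros((n, c_out))
    for i in range(n):
        rel = points - points[i]                       # (N, 3) offsets
        # distance of every offset to every kernel point -> (N, K)
        d = np.linalg.norm(rel[:, None, :] - kernel_pts[None], axis=-1)
        w = np.exp(-d ** 2 / (2 * sigma ** 2))
        for j in range(k):
            out[i] += (w[:, j:j + 1] * feats).sum(0) @ kernel_w[j]
    return out
```

Unlike an image convolution, the same code runs on any permutation of the input points and produces permuted but identical outputs.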