3D Semantic Segmentation

168 papers with code • 14 benchmarks • 31 datasets

3D Semantic Segmentation is a computer vision task that involves dividing a 3D point cloud or 3D mesh into semantically meaningful parts or regions. The goal is to identify and label the different objects and parts within a 3D scene, which supports applications such as robotics, autonomous driving, and augmented reality.
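
Concretely, the output of 3D semantic segmentation is one class label per point. The following minimal sketch illustrates only the data shapes involved; the scene, the class names, and the trivial height rule standing in for a trained model are all illustrative assumptions, not any particular method:

```python
import numpy as np

# Toy scene: points sampled from a flat "ground" plane plus a box "object".
# A real model (point-based or sparse-convolutional) would predict the
# per-point labels; here a simple height rule stands in for its output.
rng = np.random.default_rng(0)
ground = np.column_stack([rng.uniform(-5, 5, 500),
                          rng.uniform(-5, 5, 500),
                          rng.normal(0.0, 0.02, 500)])
box = rng.uniform([1.0, 1.0, 0.0], [2.0, 2.0, 1.0], size=(300, 3))
cloud = np.vstack([ground, box])          # (N, 3) point cloud

# Semantic segmentation output: one class label per point, shape (N,).
GROUND, OBJECT = 0, 1
labels = np.where(cloud[:, 2] > 0.1, OBJECT, GROUND)
```

The key invariant is that `labels` has exactly one entry per input point, regardless of how the prediction is produced.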


Latest papers with no code

RESSCAL3D: Resolution Scalable 3D Semantic Segmentation of Point Clouds

no code yet • 10 Apr 2024

To the best of our knowledge, the proposed method is the first resolution-scalable, deep-learning-based approach to 3D semantic segmentation of point clouds.

Hierarchical Insights: Exploiting Structural Similarities for Reliable 3D Semantic Segmentation

no code yet • 9 Apr 2024

Safety-critical applications like autonomous driving call for robust 3D environment perception algorithms which can withstand highly diverse and ambiguous surroundings.

TTT-KD: Test-Time Training for 3D Semantic Segmentation through Knowledge Distillation from Foundation Models

no code yet • 18 Mar 2024

Given access to paired image-point cloud (2D-3D) data, we first optimize a 3D segmentation backbone for the main task of semantic segmentation using the point clouds, and for the auxiliary task of 2D $\to$ 3D knowledge distillation (KD) using an off-the-shelf pre-trained 2D foundation model.
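
The distillation objective described above can be sketched generically: pull each 3D point feature toward the 2D foundation-model feature of its paired pixel. This is an assumed cosine-similarity formulation for illustration, not TTT-KD's actual implementation:

```python
import numpy as np

def kd_loss(feat3d, feat2d, eps=1e-8):
    """Mean (1 - cosine similarity) between paired 3D and 2D features."""
    f3 = feat3d / (np.linalg.norm(feat3d, axis=1, keepdims=True) + eps)
    f2 = feat2d / (np.linalg.norm(feat2d, axis=1, keepdims=True) + eps)
    return float(np.mean(1.0 - np.sum(f3 * f2, axis=1)))

rng = np.random.default_rng(1)
f2d = rng.normal(size=(64, 32))       # frozen 2D foundation features (teacher)
f3d_bad = rng.normal(size=(64, 32))   # unaligned 3D backbone features
f3d_good = f2d + 0.01 * rng.normal(size=(64, 32))  # nearly aligned features
```

Minimizing this loss drives the 3D backbone's features toward the teacher's representation space, which is the general idea behind 2D $\to$ 3D distillation.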

Real-time 3D semantic occupancy prediction for autonomous vehicles using memory-efficient sparse convolution

no code yet • 13 Mar 2024

In autonomous vehicles, understanding the surrounding 3D environment of the ego vehicle in real-time is essential.

AVS-Net: Point Sampling with Adaptive Voxel Size for 3D Scene Understanding

no code yet • 27 Feb 2024

To this end, this paper presents an advanced point sampler that achieves both high accuracy and efficiency.

Is Continual Learning Ready for Real-world Challenges?

no code yet • 15 Feb 2024

Our paper aims to initiate a paradigm shift, advocating for the adoption of continual learning methods through new experimental protocols that better emulate real-world conditions to facilitate breakthroughs in the field.

SGS-SLAM: Semantic Gaussian Splatting For Neural Dense SLAM

no code yet • 5 Feb 2024

We present SGS-SLAM, the first semantic visual SLAM system based on Gaussian Splatting.

Self-supervised Learning of LiDAR 3D Point Clouds via 2D-3D Neural Calibration

no code yet • 23 Jan 2024

First, we propose the learnable transformation alignment to bridge the domain gap between image and point cloud data, converting features into a unified representation space for effective comparison and matching.

POP-3D: Open-Vocabulary 3D Occupancy Prediction from Images

no code yet • NeurIPS 2023

We describe an approach to predict an open-vocabulary 3D semantic voxel occupancy map from input 2D images, with the objective of enabling 3D grounding, segmentation, and retrieval from free-form language queries.

WildScenes: A Benchmark for 2D and 3D Semantic Segmentation in Large-scale Natural Environments

no code yet • 23 Dec 2023

Recent progress in semantic scene understanding has primarily been enabled by the availability of semantically annotated bi-modal (camera and lidar) datasets in urban environments.