3D Panoptic Segmentation
7 papers with code • 1 benchmark • 1 dataset
Most implemented papers
PanopticNDT: Efficient and Robust Panoptic Mapping
As the application scenarios of mobile robots are getting more complex and challenging, scene understanding becomes increasingly crucial.
SAD: Segment Any RGBD
The Segment Anything Model (SAM) has demonstrated its effectiveness in segmenting any part of 2D RGB images.
BUOL: A Bottom-Up Framework with Occupancy-aware Lifting for Panoptic 3D Scene Reconstruction From A Single Image
In this bottom-up framework, 2D predictions are lifted into 3D voxels, which are then refined and grouped into 3D instances according to the predicted 2D instance centers.
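To make the grouping step concrete, here is a minimal, hypothetical sketch of assigning 3D voxels to instances via predicted 2D instance centers: voxel centers are projected into the image with a pinhole model and each voxel takes the id of its nearest 2D center. The function name, intrinsics, and nearest-center rule are illustrative assumptions, not the BUOL implementation.

```python
# Illustrative sketch (not the BUOL code): group occupied 3D voxels into
# instances by projecting their centers into the image and picking the
# nearest predicted 2D instance center.
import numpy as np

def lift_instances(voxel_centers_cam, instance_centers_2d, K):
    """voxel_centers_cam: (N, 3) voxel centers in camera coordinates (z > 0).
    instance_centers_2d: (M, 2) predicted instance centers in pixels.
    K: (3, 3) camera intrinsics. Returns an (N,) array of instance ids."""
    # Pinhole projection: u = fx * x / z + cx, v = fy * y / z + cy.
    proj = (K @ voxel_centers_cam.T).T
    uv = proj[:, :2] / proj[:, 2:3]
    # Distance from every projected voxel to every predicted 2D center.
    d = np.linalg.norm(uv[:, None, :] - instance_centers_2d[None, :, :], axis=-1)
    return d.argmin(axis=1)  # nearest-center assignment per voxel

if __name__ == "__main__":
    K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
    voxels = np.array([[0.0, 0.0, 2.0], [0.5, 0.1, 2.0], [-0.6, 0.2, 3.0]])
    centers = np.array([[320.0, 240.0], [220.0, 270.0]])
    print(lift_instances(voxels, centers, K))  # -> [0 0 1]
```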
PanoOcc: Unified Occupancy Representation for Camera-based 3D Panoptic Segmentation
In this work, we study camera-based 3D panoptic segmentation, aiming to achieve a unified occupancy representation for camera-only 3D scene understanding.
LiDAR-Camera Panoptic Segmentation via Geometry-Consistent and Semantic-Aware Alignment
3D panoptic segmentation is a challenging perception task that requires both semantic segmentation and instance segmentation.
Mask4D: End-to-End Mask-Based 4D Panoptic Segmentation for LiDAR Sequences
Panoptic segmentation of 3D LiDAR scans allows us to semantically describe a vehicle’s environment by predicting semantic classes for each 3D point and to identify individual instances through different instance IDs.
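The per-point output format described here, a semantic class for every point plus an instance id for "thing" points, is often stored as a single packed label. The sketch below shows one such packing (semantics in the low 16 bits, instance id in the high 16 bits); this convention mirrors common LiDAR benchmarks but is assumed here purely for illustration.

```python
# Minimal sketch of a per-point panoptic label: semantic class + instance id,
# packed into one integer. The exact encoding is dataset-specific.
import numpy as np

def pack_panoptic(semantic: np.ndarray, instance: np.ndarray) -> np.ndarray:
    """semantic, instance: (N,) uint32 arrays -> (N,) packed panoptic labels."""
    return (instance.astype(np.uint32) << np.uint32(16)) | semantic.astype(np.uint32)

def unpack_panoptic(panoptic: np.ndarray):
    return panoptic & 0xFFFF, panoptic >> 16  # (semantic, instance)

if __name__ == "__main__":
    semantic = np.array([10, 10, 40], dtype=np.uint32)  # e.g. two car points, one road point
    instance = np.array([1, 2, 0], dtype=np.uint32)     # instance 0 = "stuff" (no instance)
    packed = pack_panoptic(semantic, instance)
    sem, inst = unpack_panoptic(packed)
    assert sem.tolist() == [10, 10, 40] and inst.tolist() == [1, 2, 0]
```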
Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering
We introduce a highly efficient method for panoptic segmentation of large 3D point clouds by redefining this task as a scalable graph clustering problem.
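To illustrate the graph-clustering formulation in the abstract, here is a hypothetical sketch: superpoints are nodes, edges carry a predicted "same object" score, and connected components over the kept edges become instances. The thresholding and connected-components choice are assumptions for illustration, not the paper's actual algorithm.

```python
# Hypothetical sketch of instance grouping as graph clustering over superpoints.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def cluster_superpoints(num_superpoints, edges, edge_scores, threshold=0.5):
    """edges: (E, 2) superpoint index pairs; edge_scores: (E,) predicted
    probability that the two superpoints belong to the same object."""
    keep = edge_scores >= threshold
    rows, cols = edges[keep, 0], edges[keep, 1]
    adj = coo_matrix((np.ones(keep.sum()), (rows, cols)),
                     shape=(num_superpoints, num_superpoints))
    # Each connected component of the kept edges becomes one predicted instance.
    _, labels = connected_components(adj, directed=False)
    return labels

if __name__ == "__main__":
    edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4]])
    scores = np.array([0.9, 0.8, 0.2, 0.95])  # the weak edge splits two instances
    print(cluster_superpoints(5, edges, scores))  # -> [0 0 0 1 1]
```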