3D Semantic Occupancy Prediction
13 papers with code • 0 benchmarks • 1 dataset
Uses sparse LiDAR semantic labels for training and testing
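For concreteness, here is a minimal PyTorch sketch of how sparse LiDAR semantic labels are commonly consumed during training: voxels never hit by a labeled LiDAR point carry an ignore index and are excluded from the cross-entropy loss. The grid shape, class count, and ignore value below are illustrative assumptions, not a specific paper's configuration.

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: (B, C, X, Y, Z) logits, (B, X, Y, Z) labels.
# Voxels without a LiDAR semantic label carry the ignore index 255,
# so the loss is computed only on the sparsely labeled voxels.
def sparse_occupancy_loss(logits, labels, ignore_index=255):
    b, c = logits.shape[:2]
    logits = logits.reshape(b, c, -1)          # flatten spatial dims
    labels = labels.reshape(b, -1)
    return F.cross_entropy(logits, labels, ignore_index=ignore_index)

logits = torch.randn(2, 18, 200, 200, 16)      # e.g. 17 semantic classes + free
labels = torch.full((2, 200, 200, 16), 255, dtype=torch.long)
labels[:, 50:60, 50:60, 2:6] = 3               # a few labeled voxels
print(sparse_occupancy_loss(logits, labels))
```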
Most implemented papers
OccFormer: Dual-path Transformer for Vision-based 3D Semantic Occupancy Prediction
Vision-based perception for autonomous driving has undergone a transformation from bird's-eye-view (BEV) representations to 3D semantic occupancy.
PointOcc: Cylindrical Tri-Perspective View for Point-based 3D Semantic Occupancy Prediction
To address this, we propose a cylindrical tri-perspective view to represent point clouds effectively and comprehensively, and a PointOcc model to process them efficiently.
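A rough sketch of the cylindrical tri-perspective idea, not PointOcc's actual implementation: each point is mapped to cylindrical coordinates and its features are sum-pooled onto three orthogonal planes. Grid sizes, ranges, and the pooling choice are illustrative assumptions.

```python
import torch

# Project a point cloud into a cylindrical tri-perspective view (TPV):
# three feature planes over (rho, theta), (theta, z) and (rho, z).
def cylindrical_tpv(points, feats, R=64, A=64, H=16,
                    r_max=50.0, z_min=-3.0, z_max=5.0):
    x, y, z = points.unbind(-1)
    rho = torch.sqrt(x ** 2 + y ** 2).clamp(max=r_max - 1e-3)
    theta = torch.atan2(y, x)                                        # [-pi, pi]
    r_idx = (rho / r_max * R).long()
    a_idx = ((theta + torch.pi) / (2 * torch.pi) * A).long().clamp(max=A - 1)
    h_idx = ((z - z_min) / (z_max - z_min) * H).long().clamp(0, H - 1)

    C = feats.shape[-1]
    plane_ra = torch.zeros(R, A, C)
    plane_ah = torch.zeros(A, H, C)
    plane_rh = torch.zeros(R, H, C)
    # Sum-pool point features onto each plane (real implementations use
    # more elaborate, learned pooling).
    plane_ra.index_put_((r_idx, a_idx), feats, accumulate=True)
    plane_ah.index_put_((a_idx, h_idx), feats, accumulate=True)
    plane_rh.index_put_((r_idx, h_idx), feats, accumulate=True)
    return plane_ra, plane_ah, plane_rh

pts = torch.randn(1000, 3) * 10
f = torch.randn(1000, 8)
ra, ah, rh = cylindrical_tpv(pts, f)
print(ra.shape, ah.shape, rh.shape)
```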
InverseMatrixVT3D: An Efficient Projection Matrix-Based Approach for 3D Occupancy Prediction
In contrast, our approach leverages two projection matrices to store the static mapping relationships and matrix multiplications to efficiently generate global Bird's Eye View (BEV) features and local 3D feature volumes.
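A minimal sketch of the projection-matrix idea this abstract describes, under simplifying assumptions (a random pixel-to-cell mapping and made-up shapes, rather than the paper's geometry): because the camera geometry is static, the pixel-to-BEV mapping can be precomputed as a sparse matrix, and the view transformation at runtime reduces to a single matrix multiplication.

```python
import torch

N_pix = 4 * 32 * 88          # e.g. 4 cameras with 32x88 feature maps
N_bev = 100 * 100            # e.g. a 100x100 BEV grid

# Pretend each BEV cell is hit by one pixel; real projection matrices come
# from projecting voxel/BEV cell centers through camera intrinsics/extrinsics.
rows = torch.arange(N_bev)
cols = torch.randint(0, N_pix, (N_bev,))
vals = torch.ones(N_bev)
P = torch.sparse_coo_tensor(torch.stack([rows, cols]), vals, (N_bev, N_pix))

img_feats = torch.randn(N_pix, 64)          # flattened multi-camera features
bev_feats = torch.sparse.mm(P, img_feats)   # (N_bev, 64) in one matmul
print(bev_feats.shape)
```

The same trick applies to the local 3D feature volumes: a second precomputed matrix maps pixels to voxels, so no per-query projection or attention is needed at inference time.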
OccFusion: Multi-Sensor Fusion Framework for 3D Semantic Occupancy Prediction
A comprehensive understanding of 3D scenes is crucial in autonomous vehicles (AVs), and recent models for 3D semantic occupancy prediction have successfully addressed the challenge of describing real-world objects with varied shapes and classes.
Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified 3D Perception
HyDRa achieves a new state-of-the-art for camera-radar fusion of 64.2 NDS (+1.8) and 58.4 AMOTA (+1.5) on the public nuScenes dataset.
GaussianFormer: Scene as Gaussians for Vision-Based 3D Semantic Occupancy Prediction
To address this, we propose an object-centric representation to describe 3D scenes with sparse 3D semantic Gaussians where each Gaussian represents a flexible region of interest and its semantic features.
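A simplified sketch of querying such a Gaussian scene representation, assuming isotropic-per-axis covariances and made-up shapes (not GaussianFormer's actual formulation): each Gaussian carries a mean, scale, opacity, and per-class logits, and a voxel center aggregates Gaussians weighted by their density at that point.

```python
import torch

def splat_gaussians_to_voxels(means, scales, opacities, sem_logits, queries):
    # means: (G, 3), scales: (G, 3), opacities: (G,), sem_logits: (G, C)
    # queries: (Q, 3) voxel centers
    diff = queries[:, None, :] - means[None, :, :]          # (Q, G, 3)
    maha = ((diff / scales[None]) ** 2).sum(-1)             # (Q, G)
    w = opacities[None] * torch.exp(-0.5 * maha)            # per-Gaussian weight
    sem = torch.einsum('qg,gc->qc', w, sem_logits)          # (Q, C) semantics
    occ = w.sum(-1)                                         # soft occupancy
    return occ, sem

G, C, Q = 256, 17, 1000
occ, sem = splat_gaussians_to_voxels(
    torch.randn(G, 3), torch.rand(G, 3) + 0.1, torch.rand(G),
    torch.randn(G, C), torch.randn(Q, 3))
print(occ.shape, sem.shape)
```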
DAOcc: 3D Object Detection Assisted Multi-Sensor Fusion for 3D Occupancy Prediction
Multi-sensor fusion significantly enhances the accuracy and robustness of 3D semantic occupancy prediction, which is crucial for autonomous driving and robotics.
ALOcc: Adaptive Lifting-based 3D Semantic Occupancy and Cost Volume-based Flow Prediction
In this work, we strive to improve performance by introducing a series of targeted improvements for 3D semantic occupancy prediction and flow estimation.
Robust 3D Semantic Occupancy Prediction with Calibration-free Spatial Transformation
Recent methods are mainly built on the 2D-to-3D transformation that relies on sensor calibration to project the 2D image information into the 3D space.
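For context, a minimal sketch of the calibration-dependent 2D-to-3D transformation this sentence refers to (the baseline the paper seeks to avoid, not its calibration-free method): voxel centers are projected into the image with known intrinsics and extrinsics and bilinearly sample image features. The camera parameters below are placeholders, not real calibration.

```python
import torch
import torch.nn.functional as F

def lift_with_calibration(img_feat, voxels, K, T_cam_from_world):
    # img_feat: (1, C, H, W); voxels: (N, 3) world coordinates
    N = voxels.shape[0]
    homo = torch.cat([voxels, torch.ones(N, 1)], dim=1)       # (N, 4)
    cam = (T_cam_from_world @ homo.T).T[:, :3]                # camera frame
    uvw = (K @ cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:].clamp(min=1e-5)              # pixel coords
    H, W = img_feat.shape[-2:]
    grid = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,
                        uv[:, 1] / (H - 1) * 2 - 1], dim=-1)  # to [-1, 1]
    sampled = F.grid_sample(img_feat, grid.view(1, 1, N, 2),
                            align_corners=True)               # (1, C, 1, N)
    return sampled.view(img_feat.shape[1], N).T               # (N, C)

K = torch.tensor([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
T = torch.eye(4)
voxels = torch.cat([torch.rand(100, 2) * 4 - 2,               # x, y near the camera axis
                    torch.rand(100, 1) * 10 + 2], dim=1)      # positive depth
feats = lift_with_calibration(torch.randn(1, 64, 480, 640), voxels, K, T)
print(feats.shape)
```

Any error in K or T corrupts the sampling locations, which is the robustness problem a calibration-free spatial transformation is designed to sidestep.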
GaussianFormer-2: Probabilistic Gaussian Superposition for Efficient 3D Occupancy Prediction
To address this, we propose a probabilistic Gaussian superposition model which interprets each Gaussian as a probability distribution of its neighborhood being occupied and conforms to probabilistic multiplication to derive the overall geometry.
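A minimal sketch of probabilistic superposition under an assumed isotropic Gaussian density (not the paper's exact parameterization): each Gaussian yields a probability p_i that a query point is occupied, and the combined occupancy is the probabilistic "or", 1 - prod_i (1 - p_i).

```python
import torch

def superposed_occupancy(means, sigmas, alphas, queries):
    # means: (G, 3), sigmas: (G,), alphas: (G,) in [0, 1], queries: (Q, 3)
    d2 = ((queries[:, None, :] - means[None]) ** 2).sum(-1)   # (Q, G)
    p = alphas[None] * torch.exp(-0.5 * d2 / sigmas[None] ** 2)
    # product over Gaussians taken in log space for numerical stability
    return 1.0 - torch.exp(torch.log1p(-p.clamp(max=1 - 1e-6)).sum(-1))

occ = superposed_occupancy(torch.randn(128, 3), torch.rand(128) + 0.1,
                           torch.rand(128), torch.randn(1000, 3))
print(occ.shape, occ.min().item(), occ.max().item())
```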