LISNeRF Mapping: LiDAR-based Implicit Mapping via Semantic Neural Fields for Large-Scale 3D Scenes

4 Nov 2023 · Jianyuan Zhang, Zhiliu Yang, Meng Zhang

Large-scale semantic mapping is crucial for outdoor autonomous agents to fulfill high-level tasks such as planning and navigation. This paper proposes a novel method for large-scale 3D semantic reconstruction through implicit representations, using posed LiDAR measurements alone. We first leverage an octree-based hierarchical structure to store implicit features; these features are then decoded into semantic information and signed distance values through shallow Multilayer Perceptrons (MLPs). We adopt off-the-shelf algorithms to predict the semantic labels and instance IDs of point clouds, and jointly optimize the feature embeddings and MLP parameters with a self-supervision paradigm for point cloud geometry and a pseudo-supervision paradigm for semantic and panoptic labels. At inference time, semantic categories and geometric structures are regressed for novel query points, and marching cubes is applied to subdivide and visualize the scenes. For scenarios with memory constraints, a map-stitching strategy is also developed to merge sub-maps into a complete map. Experiments on two real-world datasets, SemanticKITTI and SemanticPOSS, demonstrate the superior segmentation efficiency and mapping effectiveness of our framework compared to current state-of-the-art 3D LiDAR mapping methods.
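
To make the described pipeline concrete, below is a minimal PyTorch sketch of the decoding and joint-optimization stages: multi-resolution learnable feature grids (a dense stand-in for the paper's octree) are trilinearly interpolated at query points and decoded by shallow MLPs into a signed distance value and semantic logits, then jointly optimized with an L1 loss on SDF targets and a cross-entropy loss on pseudo-labels. The names (`HierarchicalField`, `NUM_CLASSES`), the grid resolutions, and the dense-grid simplification are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the implicit semantic field, assuming a PyTorch setup.
# The paper's octree is simplified here to dense multi-resolution grids.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 20  # e.g. SemanticKITTI single-scan classes (assumption)

class HierarchicalField(nn.Module):
    """Multi-resolution feature grids + shallow MLP heads for SDF and semantics."""
    def __init__(self, feat_dim=8, resolutions=(16, 32, 64), hidden=64):
        super().__init__()
        # One learnable 3D feature grid per level (dense stand-in for the octree).
        self.grids = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(1, feat_dim, r, r, r))
             for r in resolutions]
        )
        in_dim = feat_dim * len(resolutions)
        # Shallow MLPs decode the concatenated features, as in the paper.
        self.sdf_head = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )
        self.sem_head = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, NUM_CLASSES)
        )

    def forward(self, xyz):  # xyz in [-1, 1]^3, shape (N, 3)
        # grid_sample over a 5D grid expects queries shaped (1, N, 1, 1, 3).
        q = xyz.view(1, -1, 1, 1, 3)
        feats = [
            F.grid_sample(g, q, align_corners=True).view(g.shape[1], -1).t()
            for g in self.grids  # each: (N, feat_dim), trilinearly interpolated
        ]
        h = torch.cat(feats, dim=-1)  # (N, feat_dim * num_levels)
        return self.sdf_head(h).squeeze(-1), self.sem_head(h)

# Joint optimization: self-supervised SDF regression on points sampled along
# LiDAR rays, plus cross-entropy against pseudo semantic labels. The targets
# below are random placeholders standing in for real ray samples and labels.
field = HierarchicalField()
opt = torch.optim.Adam(field.parameters(), lr=1e-3)

xyz = torch.rand(4096, 3) * 2 - 1                     # query points
sdf_target = torch.randn(4096)                        # projective SDF targets
sem_pseudo = torch.randint(0, NUM_CLASSES, (4096,))   # off-the-shelf labels

sdf_pred, sem_logits = field(xyz)
loss = F.l1_loss(sdf_pred, sdf_target) + F.cross_entropy(sem_logits, sem_pseudo)
opt.zero_grad()
loss.backward()
opt.step()
```

At inference, the same field would be queried on a regular grid of points and the predicted signed distances passed to marching cubes to extract the mesh, with the semantic head labeling each vertex.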
