3D Semantic Segmentation
74 papers with code • 7 benchmarks • 15 datasets
By exploiting metric space distances, our network is able to learn local features with increasing contextual scales.
Ranked #4 on 3D Semantic Segmentation on SensatUrban
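The "increasing contextual scales" idea from this entry can be illustrated with a simple ball query: grouping points around the same centers at increasing metric-space radii yields local neighborhoods of growing contextual scale. This is a minimal NumPy sketch of the concept, not the paper's implementation; all names are hypothetical.

```python
import numpy as np

def ball_query(points, centers, radius, k):
    """For each center, gather up to k neighbor indices within `radius`
    (a metric-space distance), padding with an in-radius index if fewer exist."""
    d = np.linalg.norm(points[None, :, :] - centers[:, None, :], axis=-1)  # (C, N)
    groups = []
    for row in d:
        idx = np.nonzero(row <= radius)[0]
        if idx.size < k:  # pad with the first valid neighbor
            idx = np.concatenate([idx, np.repeat(idx[0], k - idx.size)])
        groups.append(idx[:k])
    return np.stack(groups)  # (C, k)

# Multi-scale grouping: the same centers queried at increasing radii give
# neighborhoods with increasing contextual scale.
rng = np.random.default_rng(0)
pts = rng.random((256, 3))
centers = pts[:8]  # query centers drawn from the cloud itself
small = ball_query(pts, centers, radius=0.1, k=16)
large = ball_query(pts, centers, radius=0.3, k=64)
```

In a full pipeline each neighborhood would be fed to a small point-wise network and pooled; here only the grouping step is shown.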
Submanifold Sparse Convolutional Networks
Ranked #6 on Semantic Segmentation on ScanNet
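The defining property of a submanifold sparse convolution is that outputs are computed only at sites that are already active, so the active set never dilates with depth. A toy dictionary-based sketch of that rule (illustrative only, not the SparseConvNet library's implementation):

```python
import numpy as np

def submanifold_conv3d(active, feats, weights):
    """Submanifold sparse 3x3x3 convolution: output sites == input sites,
    and inactive neighbors are skipped entirely."""
    lookup = {site: f for site, f in zip(active, feats)}
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1)
                            for dy in (-1, 0, 1)
                            for dz in (-1, 0, 1)]
    out = {}
    for site in active:  # never create new active sites
        acc = np.zeros(weights.shape[-1])
        for o_idx, (dx, dy, dz) in enumerate(offsets):
            nb = (site[0] + dx, site[1] + dy, site[2] + dz)
            if nb in lookup:  # contribute only where the input is active
                acc += lookup[nb] @ weights[o_idx]
        out[site] = acc
    return out

# Demo: two adjacent voxels plus one isolated voxel.
rng = np.random.default_rng(1)
active = [(0, 0, 0), (0, 0, 1), (5, 5, 5)]
feats = rng.random((3, 2))          # 2 input channels per active voxel
w = rng.random((27, 2, 4))          # one 2x4 matrix per kernel offset
out = submanifold_conv3d(active, feats, w)
```

The isolated voxel only sees itself through the center offset, which is exactly why sparsity is preserved through many layers.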
Traditional approaches to 3D reconstruction rely on an intermediate representation of depth maps prior to estimating a full 3D model of a scene.
Ranked #1 on 3D Reconstruction on ScanNet
To overcome challenges in the 4D space, we propose the hybrid kernel, a special case of the generalized sparse convolution, and the trilateral-stationary conditional random field that enforces spatio-temporal consistency in the 7D space-time-chroma space.
Ranked #5 on Semantic Segmentation on S3DIS Area5
PartNet: A Large-scale Benchmark for Fine-grained and Hierarchical Part-level 3D Object Understanding
We present PartNet: a consistent, large-scale dataset of 3D objects annotated with fine-grained, instance-level, and hierarchical 3D part information.
Ranked #4 on 3D Semantic Segmentation on PartNet
Finally, we use these new concepts to build a very deep 56-layer GCN, and show how it significantly boosts performance (+3.7% mIoU over state-of-the-art) in the task of point cloud semantic segmentation.
Ranked #19 on Semantic Segmentation on S3DIS
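The residual connections that make a 56-layer GCN trainable can be illustrated with a single residual edge-convolution layer over a k-NN graph. This is a toy NumPy sketch under assumed shapes, not the authors' code:

```python
import numpy as np

def knn_graph(x, k):
    """Indices of the k nearest neighbors of each point (self excluded)."""
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    return np.argsort(d, axis=1)[:, 1:k + 1]

def res_edgeconv(x, w, k=8):
    """One residual graph-conv layer: transform relative edge features,
    max-aggregate over neighbors, then add the input back (the skip
    connection that lets GCNs go very deep)."""
    idx = knn_graph(x, k)
    nbrs = x[idx]                               # (N, k, C)
    edge = nbrs - x[:, None, :]                 # relative edge features
    msg = np.maximum(edge @ w, 0).max(axis=1)   # ReLU + max-aggregation
    return x + msg                              # residual skip

# Demo on a small random cloud; w keeps the channel count so the skip works.
rng = np.random.default_rng(0)
x = rng.random((32, 3))
w = np.eye(3) * 0.1
y = res_edgeconv(x, w, k=8)
```

Stacking many such layers is stable because each layer only learns a correction to the identity mapping.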
We propose a novel deep learning-based framework for semantic segmentation of large-scale point clouds containing millions of points.
Ranked #5 on Semantic Segmentation on Semantic3D