Learning Multi-View Aggregation In the Wild for Large-Scale 3D Semantic Segmentation

CVPR 2022 · Damien Robert, Bruno Vallet, Loic Landrieu

Recent works on 3D semantic segmentation propose to exploit the synergy between images and point clouds by processing each modality with a dedicated network and projecting learned 2D features onto 3D points. Merging large-scale point clouds and images raises several challenges, such as constructing a mapping between points and pixels, and aggregating features between multiple views. Current methods require mesh reconstruction or specialized sensors to recover occlusions, and use heuristics to select and aggregate available images. In contrast, we propose an end-to-end trainable multi-view aggregation model leveraging the viewing conditions of 3D points to merge features from images taken at arbitrary positions. Our method can combine standard 2D and 3D networks and outperforms both 3D models operating on colorized point clouds and hybrid 2D/3D networks without requiring colorization, meshing, or true depth maps. We set a new state-of-the-art for large-scale indoor/outdoor semantic segmentation on S3DIS (74.7 mIoU 6-Fold) and on KITTI-360 (58.3 mIoU). Our full pipeline is accessible at https://github.com/drprojects/DeepViewAgg, and only requires raw 3D scans and a set of images and poses.
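The core idea of the abstract — merging 2D features from multiple views onto each 3D point using the point's viewing conditions, with no heuristic view selection — can be sketched as a learned attention over views. The sketch below is a minimal illustration, not the paper's actual architecture: the function names, the choice of a single linear scoring layer, and the two-dimensional viewing-condition vector (e.g. depth and viewing angle) are all assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aggregate_views(feats, conditions, w, b):
    """Attention-weighted aggregation of per-view 2D features onto 3D points.

    feats:      (n_points, n_views, d) image features projected onto each point
    conditions: (n_points, n_views, c) viewing conditions (e.g. depth, angle)
    w, b:       parameters of a hypothetical linear scoring layer, shapes (c,) and ()
    Returns:    (n_points, d) features, one vector per 3D point.
    """
    scores = conditions @ w + b                   # (n_points, n_views)
    attn = softmax(scores, axis=1)                # weights sum to 1 over views
    return (attn[..., None] * feats).sum(axis=1)  # weighted sum of view features
```

In an end-to-end pipeline, the scoring parameters would be trained jointly with the 2D and 3D networks, so the model learns which views are informative for each point instead of relying on hand-crafted selection rules. With all-zero scoring parameters the weights are uniform, so the output reduces to the per-point mean over views.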



Results from the Paper


Ranked #1 on 3D Semantic Segmentation on KITTI-360 (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| 3D Semantic Segmentation | KITTI-360 | DeepViewAgg | mIoU | 58.3 | #1 |
| | | | mIoU Category | 73.66 | #2 |
| | | | mIoU Val | 57.8 | #3 |
| | | | Model size | 41.2M | #6 |
| 3D Semantic Segmentation | KITTI-360 | MinkowskiNet | mIoU | 53.92 | #2 |
| | | | mIoU Category | 74.08 | #1 |
| | | | mIoU Val | 54.2 | #4 |
| | | | Model size | 37.9M | #5 |
| Semantic Segmentation | S3DIS | DeepViewAgg | Mean IoU | 74.7 | #12 |
| | | | mAcc | 83.8 | #8 |
| | | | oAcc | 90.1 | #13 |
| | | | Number of params | 41.2M | #51 |
| | | | Params (M) | 41.2 | #2 |

Methods


No methods listed for this paper.