Drive&Segment: Unsupervised Semantic Segmentation of Urban Scenes via Cross-modal Distillation

This work investigates learning pixel-wise semantic image segmentation in urban scenes without any manual annotation, using only raw, non-curated data collected by cars that, equipped with cameras and LiDAR sensors, drive around a city. Our contributions are threefold. First, we propose a novel method for cross-modal unsupervised learning of semantic image segmentation that leverages synchronized LiDAR and image data. The key ingredient of our method is an object proposal module that analyzes the LiDAR point cloud to obtain proposals for spatially consistent objects. Second, we show that these 3D object proposals can be aligned with the input images and reliably clustered into semantically meaningful pseudo-classes. Finally, we develop a cross-modal distillation approach that leverages image data partially annotated with the resulting pseudo-classes to train a transformer-based model for image semantic segmentation. We show the generalization capabilities of our method by testing on four different datasets (Cityscapes, Dark Zurich, Nighttime Driving and ACDC) without any fine-tuning, and demonstrate significant improvements over the current state of the art on this problem. See the project webpage https://vobecant.github.io/DriveAndSegment/ for code and more.
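To make the pseudo-labeling step concrete, below is a minimal sketch (NumPy/scikit-learn, not the authors' code) of the general idea: 3D object proposals extracted from the LiDAR point cloud are projected into the synchronized camera image, and per-proposal descriptors are clustered into pseudo-classes to produce a sparse label map whose unlabeled pixels are ignored during training. Names, shapes, and parameters such as `project_to_image` and `n_pseudo_classes` are illustrative assumptions; in practice the clustering would be run over proposals pooled from many scenes so that pseudo-classes are shared across images.

```python
# Minimal sketch (not the authors' implementation) of LiDAR-driven pseudo-labeling.
# Hypothetical inputs:
#   proposals      -- list of (N_i, 3) point arrays in the camera frame, one per 3D object proposal
#   proposal_feats -- (M, D) array with one descriptor per proposal
#   K              -- (3, 3) camera intrinsics
import numpy as np
from sklearn.cluster import KMeans


def project_to_image(points, K, img_h, img_w):
    """Project 3D points (camera frame) to pixel coordinates, keeping points in front of the camera."""
    in_front = points[:, 2] > 0
    pts = points[in_front]
    uv = (K @ pts.T).T                  # (N, 3) homogeneous image coordinates
    uv = uv[:, :2] / uv[:, 2:3]         # perspective division
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    valid = (u >= 0) & (u < img_w) & (v >= 0) & (v < img_h)
    return u[valid], v[valid]


def pseudo_label_image(proposals, proposal_feats, K, img_h, img_w, n_pseudo_classes=30):
    """Cluster proposal descriptors into pseudo-classes and paint them into a sparse label map.

    Returns an (H, W) integer map where -1 marks unlabeled pixels (ignored by the segmentation loss).
    """
    pseudo_classes = KMeans(n_clusters=n_pseudo_classes, n_init=10).fit_predict(proposal_feats)
    label_map = np.full((img_h, img_w), -1, dtype=np.int64)
    for pts, cls in zip(proposals, pseudo_classes):
        u, v = project_to_image(pts, K, img_h, img_w)
        label_map[v, u] = cls           # labels only where LiDAR evidence exists
    return label_map
```

The resulting sparse pseudo-label maps can then serve as targets for training a 2D segmentation network (a Segmenter ViT-S/16 in the results below), which is the cross-modal distillation step described in the abstract.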

Task | Dataset | Model | Metric | Value | Global Rank
Unsupervised Semantic Segmentation | ACDC (Adverse Conditions Dataset with Correspondences) | Segmenter ViT-S/16 | mIoU | 16.7 | #1
Unsupervised Semantic Segmentation | Cityscapes val | Segmenter ViT-S/16 | mIoU | 21.8 | #1
Unsupervised Semantic Segmentation | Dark Zurich | Segmenter ViT-S/16 | mIoU | 14.2 | #1
Unsupervised Semantic Segmentation | Nighttime Driving | Segmenter ViT-S/16 | mIoU | 18.9 | #1
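For reference, the mIoU values above are the mean over classes of per-class intersection-over-union. Below is a minimal sketch of that metric computed from label maps; it is not the paper's exact evaluation protocol, which for unsupervised methods additionally requires mapping pseudo-classes to the ground-truth label set before scoring.

```python
# Minimal sketch of mean IoU between predicted and ground-truth label maps (NumPy).
# A pseudo-class -> ground-truth-class mapping is assumed to have been applied already.
import numpy as np


def mean_iou(pred, gt, num_classes, ignore_index=255):
    """pred, gt: (H, W) integer label maps; returns mIoU over classes that appear in the data."""
    mask = gt != ignore_index
    # Confusion matrix: rows = ground truth, columns = prediction.
    cm = np.bincount(num_classes * gt[mask] + pred[mask],
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)
    return iou[union > 0].mean()
```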
