Bending Reality: Distortion-aware Transformers for Adapting to Panoramic Semantic Segmentation

Panoramic images, with their 360-degree field of view, capture exhaustive information about the surrounding space, providing a rich foundation for scene understanding. To unfold this potential in the form of robust panoramic segmentation models, large quantities of expensive, pixel-wise annotations are crucial. Such annotations are available, but predominantly for narrow-angle, pinhole-camera images, which, off the shelf, are sub-optimal resources for training panoramic models. Distortions and the distinct image-feature distribution of 360-degree panoramas impede transfer from the annotation-rich pinhole domain and therefore cause a significant drop in performance. To bridge this domain gap and bring together semantic annotations from pinhole and 360-degree surround visuals, we propose to learn object deformations and panoramic image distortions with the Deformable Patch Embedding (DPE) and Deformable MLP (DMLP) components, which blend into our Transformer for PAnoramic Semantic Segmentation (Trans4PASS) model. Finally, we tie together shared semantics in pinhole and panoramic feature embeddings by generating multi-scale prototype features and aligning them via Mutual Prototypical Adaptation (MPA) for unsupervised domain adaptation. On the indoor Stanford2D3D dataset, Trans4PASS with MPA performs comparably to fully supervised state-of-the-art models, removing the need for over 1,400 labeled panoramas. On the outdoor DensePASS dataset, we surpass the previous state of the art by 14.39% mIoU and set the new bar at 56.38%. Code will be made publicly available at https://github.com/jamycheung/Trans4PASS.
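To illustrate the core idea behind a deformable patch embedding, the following is a minimal PyTorch sketch, not the paper's actual implementation: a small convolutional head predicts per-pixel sampling offsets, the input is resampled with `grid_sample` to counteract local distortion, and a strided convolution then projects patches to tokens. All names, layer sizes, and the single-offset-head design are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DeformablePatchEmbed(nn.Module):
    """Sketch of a deformable patch embedding (illustrative, not the
    official Trans4PASS code): predict sampling offsets, resample the
    input, then project non-overlapping patches to embedding vectors."""

    def __init__(self, in_ch: int = 3, embed_dim: int = 64, patch: int = 4):
        super().__init__()
        # Predicts an (dx, dy) offset per spatial position, in the
        # normalized [-1, 1] grid coordinates used by grid_sample.
        self.offset = nn.Conv2d(in_ch, 2, kernel_size=3, padding=1)
        nn.init.zeros_(self.offset.weight)
        nn.init.zeros_(self.offset.bias)  # start as a regular patch embed
        # Standard patch projection: one token per patch x patch block.
        self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=patch, stride=patch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, _, H, W = x.shape
        # Identity sampling grid covering the image in [-1, 1].
        ys = torch.linspace(-1, 1, H, device=x.device)
        xs = torch.linspace(-1, 1, W, device=x.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        grid = torch.stack((gx, gy), dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
        # Learned per-position deformation of the sampling locations.
        off = self.offset(x).permute(0, 2, 3, 1)  # B, H, W, 2
        x = F.grid_sample(x, grid + off, align_corners=True)
        return self.proj(x)  # B, embed_dim, H/patch, W/patch
```

Because the offset head is zero-initialized, the module initially behaves exactly like an ordinary patch embedding and only learns to bend the sampling grid where the panoramic distortion makes it useful.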

CVPR 2022

Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Semantic Segmentation | Cityscapes val | Trans4PASS (Tiny) | mIoU | 79.1% | #50 |
| Semantic Segmentation | Cityscapes val | Trans4PASS (Small) | mIoU | 81.1% | #39 |
| Semantic Segmentation | DensePASS | Trans4PASS (multi-scale) | mIoU | 56.38% | #3 |
| Semantic Segmentation | DensePASS | Trans4PASS (single-scale) | mIoU | 55.25% | #4 |
| Semantic Segmentation | Stanford2D3D Panoramic | Trans4PASS (Supervised + Small) | mIoU | 52.1% | #12 |
| Semantic Segmentation | Stanford2D3D Panoramic | Trans4PASS (Supervised + Small + MS) | mIoU | 53.0% | #8 |
| Semantic Segmentation | Stanford2D3D Panoramic | Trans4PASS (UDA + Source Only) | mIoU | 48.1% | #17 |
| Semantic Segmentation | Stanford2D3D Panoramic | Trans4PASS (UDA + MPA) | mIoU | 50.8% | #15 |
| Semantic Segmentation | Stanford2D3D Panoramic | Trans4PASS (UDA + MPA + MS) | mIoU | 51.2% | #14 |
| Semantic Segmentation | SynPASS | Trans4PASS | mIoU | 38.57% | #2 |
