Self-supervised Augmentation Consistency for Adapting Semantic Segmentation

CVPR 2021 · Nikita Araslanov, Stefan Roth

We propose an approach to domain adaptation for semantic segmentation that is both practical and highly accurate. In contrast to previous work, we abandon the use of computationally involved adversarial objectives, network ensembles and style transfer. Instead, we employ standard data augmentation techniques – photometric noise, flipping and scaling – and ensure consistency of the semantic predictions across these image transformations. We develop this principle in a lightweight self-supervised framework trained on co-evolving pseudo labels without the need for cumbersome extra training rounds. Simple in training from a practitioner's standpoint, our approach is remarkably effective. We achieve significant improvements of the state-of-the-art segmentation accuracy after adaptation, consistent both across different choices of the backbone architecture and adaptation scenarios.
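The core idea – averaging predictions across augmented views and keeping only confident pixels as pseudo-labels – can be illustrated with a minimal sketch. This is a hypothetical NumPy helper, not the authors' implementation; the function name, the flip-only augmentation, and the fixed confidence threshold are simplifying assumptions (the paper also uses photometric noise and scaling, and co-evolves the labels during training).

```python
import numpy as np

def flip_consistency_pseudo_labels(logits, logits_flipped, threshold=0.9):
    """Hypothetical sketch of augmentation-consistency pseudo-labeling.

    logits:         (C, H, W) network output on the original image
    logits_flipped: (C, H, W) network output on the horizontally flipped image
    Returns per-pixel pseudo-labels and a mask of confident pixels.
    """
    def softmax(x):
        # numerically stable softmax over the class axis
        e = np.exp(x - x.max(axis=0, keepdims=True))
        return e / e.sum(axis=0, keepdims=True)

    p = softmax(logits)
    # undo the flip so both predictions live in the same image frame
    p_flip = softmax(logits_flipped)[:, :, ::-1]
    p_avg = (p + p_flip) / 2.0                # consistency: average the views

    confidence = p_avg.max(axis=0)            # (H, W) max class probability
    pseudo = p_avg.argmax(axis=0)             # (H, W) pseudo-label map
    mask = confidence >= threshold            # supervise only confident pixels
    return pseudo, mask
```

In a training loop, the masked pseudo-labels would supervise a cross-entropy loss on the augmented views, so the network is pushed toward predictions that agree across transformations.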

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Synthetic-to-Real Translation | GTAV-to-Cityscapes Labels | SAC | mIoU | 53.8 | # 30 |
| Domain Adaptation | SYNTHIA-to-Cityscapes | SAC (ResNet-101) | mIoU | 52.6 | # 17 |
| Synthetic-to-Real Translation | SYNTHIA-to-Cityscapes | SAC (ResNet-101) | mIoU (13 classes) | 59.3 | # 18 |
| Synthetic-to-Real Translation | SYNTHIA-to-Cityscapes | SAC (ResNet-101) | mIoU (16 classes) | 52.6 | # 17 |
| Domain Adaptation | SYNTHIA-to-Cityscapes | SAC (VGG-16) | mIoU | 49.1 | # 21 |

Methods


No methods listed for this paper.