Generalize then Adapt: Source-Free Domain Adaptive Semantic Segmentation

Unsupervised domain adaptation (DA) has gained substantial interest in semantic segmentation. However, almost all prior art assumes concurrent access to both labeled source data and unlabeled target data, making it unsuitable for scenarios that demand source-free adaptation. In this work, we enable source-free DA by partitioning the task into two: a) source-only domain generalization and b) source-free target adaptation. Towards the former, we provide theoretical insights to develop a multi-head framework trained with a virtually extended multi-source dataset, aiming to balance generalization and specificity. Towards the latter, we utilize the multi-head framework to extract reliable target pseudo-labels for self-training. Additionally, we introduce a novel conditional prior-enforcing auto-encoder that discourages spatial irregularities, thereby enhancing the pseudo-label quality. Experiments on the standard GTA5-to-Cityscapes and SYNTHIA-to-Cityscapes benchmarks show that our approach surpasses even non-source-free prior art. Further, we show compatibility with online adaptation, enabling deployment in a sequentially changing environment.
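The abstract describes extracting reliable target pseudo-labels from a multi-head framework for self-training. A minimal sketch of one plausible filtering rule is shown below: keep a pixel's pseudo-label only when all heads agree on the predicted class and the ensemble confidence is high, otherwise mark it with the standard ignore index. The function name, tensor layout `(heads, classes, pixels)`, and the agreement-plus-threshold rule are illustrative assumptions, not the paper's exact aggregation scheme.

```python
import numpy as np

IGNORE = 255  # conventional ignore index in Cityscapes-style segmentation


def multi_head_pseudo_labels(probs, conf_thresh=0.9):
    """Filter pseudo-labels using multi-head agreement (hypothetical rule).

    probs: float array of shape (H, C, N) -- per-head class probabilities
           for H heads, C classes, and N pixels.
    Returns an (N,) integer label map: a pixel keeps its predicted class
    only if every head predicts the same argmax class AND the mean
    confidence of the ensemble's top class exceeds conf_thresh;
    all other pixels are set to IGNORE.
    """
    preds = probs.argmax(axis=1)                # (H, N): per-head argmax class
    agree = (preds == preds[0]).all(axis=0)     # (N,): are all heads unanimous?
    mean_conf = probs.mean(axis=0).max(axis=0)  # (N,): confidence of ensemble top class
    return np.where(agree & (mean_conf > conf_thresh), preds[0], IGNORE)
```

In this sketch, low-confidence or ambiguous pixels are excluded from the self-training loss via the ignore index, which is the usual way noisy pseudo-labels are suppressed in segmentation self-training.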

ICCV 2021
| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Domain Generalization | GTA5-to-Cityscapes | GtA-SFDA Source-Only (DeepLabv2-ResNet101) | mIoU | 43.5 | #1 |
| Domain Adaptation | GTA5-to-Cityscapes | GtA-SFDA (DeepLabv2-ResNet101) | mIoU | 53.4 | #1 |
| Domain Adaptation | SYNTHIA-to-Cityscapes | GtA-SFDA (DeepLabv2-ResNet101) | mIoU | 60.1 | #2 |

