DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation

CVPR 2022  ·  Lukas Hoyer, Dengxin Dai, Luc van Gool

As acquiring pixel-wise annotations of real-world images for semantic segmentation is costly, a model can instead be trained on more accessible synthetic data and adapted to real images without requiring their annotations. This process is studied in unsupervised domain adaptation (UDA). Although a large number of methods propose new adaptation strategies, they are mostly based on outdated network architectures. Since the influence of recent network architectures has not been systematically studied, we first benchmark different network architectures for UDA and reveal the potential of Transformers for UDA semantic segmentation. Based on these findings, we propose a novel UDA method, DAFormer. The DAFormer network consists of a Transformer encoder and a multi-level context-aware feature fusion decoder. It is enabled by three simple but crucial training strategies that stabilize training and avoid overfitting to the source domain: (1) Rare Class Sampling on the source domain improves the quality of the pseudo-labels by mitigating the confirmation bias of self-training toward common classes, while (2) a Thing-Class ImageNet Feature Distance and (3) a learning rate warmup promote feature transfer from ImageNet pretraining. DAFormer represents a major advance in UDA: it improves the state of the art by 10.8 mIoU on GTA-to-Cityscapes and 5.4 mIoU on Synthia-to-Cityscapes, and enables learning even difficult classes such as train, bus, and truck. The implementation is available at https://github.com/lhoyer/DAFormer.
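To make two of the training strategies concrete, below is a minimal NumPy sketch of Rare Class Sampling (sample a class inversely to its source-pixel frequency via a temperature-scaled softmax, then pick a source image containing that class) and of a linear learning rate warmup. This is an illustrative reconstruction under stated assumptions, not the authors' PyTorch implementation; names such as `rcs_probs`, `images_with_class`, and the temperature and warmup values are hypothetical.

```python
import numpy as np

def rcs_probs(class_pixel_counts: np.ndarray, temperature: float = 0.01) -> np.ndarray:
    """Turn per-class pixel counts over the source dataset into sampling
    probabilities: the rarer a class, the more often it is sampled."""
    freq = class_pixel_counts / class_pixel_counts.sum()
    logits = (1.0 - freq) / temperature
    logits -= logits.max()          # shift for numerical stability of exp()
    p = np.exp(logits)
    return p / p.sum()

def sample_source_image(rng: np.random.Generator,
                        class_probs: np.ndarray,
                        images_with_class: list[list[str]]) -> str:
    """Draw a class c ~ P(c), then uniformly pick a source image containing c."""
    c = rng.choice(len(class_probs), p=class_probs)
    return rng.choice(images_with_class[c])

def warmup_lr(base_lr: float, step: int, warmup_steps: int = 1500) -> float:
    """Linear learning rate warmup: ramp from 0 to base_lr over warmup_steps."""
    return base_lr * min(1.0, step / warmup_steps)

# Example: three classes where the last one (e.g. "train") covers few pixels.
rng = np.random.default_rng(0)
probs = rcs_probs(np.array([9e8, 5e8, 1e7]))
print(probs)  # the rare class dominates the sampling distribution
```

With a small temperature, sampling concentrates on the rarest classes, so images containing classes like train or bus appear in source batches far more often than their pixel frequency alone would allow.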


Results from the Paper


Task                            Dataset                    Model     Metric             Value  Rank
------------------------------  -------------------------  --------  -----------------  -----  ----
Domain Adaptation               Cityscapes to ACDC         DAFormer  mIoU               55.4   #9
Semantic Segmentation           Dark Zurich                DAFormer  mIoU               53.8   #7
Semantic Segmentation           DensePASS                  DAFormer  mIoU               54.67  #5
Domain Adaptation               GTA5 to Cityscapes         DAFormer  mIoU               68.3   #9
Semantic Segmentation           GTAV-to-Cityscapes Labels  DAFormer  mIoU               68.3   #6
Synthetic-to-Real Translation   GTAV-to-Cityscapes Labels  DAFormer  mIoU               68.3   #11
Image-to-Image Translation      GTAV-to-Cityscapes Labels  DAFormer  mIoU               68.3   #8
Unsupervised Domain Adaptation  GTAV-to-Cityscapes Labels  DAFormer  mIoU               68.3   #10
Image-to-Image Translation      SYNTHIA-to-Cityscapes      DAFormer  mIoU (13 classes)  67.4   #7
Domain Adaptation               SYNTHIA-to-Cityscapes      DAFormer  mIoU (16 classes)  60.9   #9
Synthetic-to-Real Translation   SYNTHIA-to-Cityscapes      DAFormer  mIoU (13 classes)  67.4   #8
                                                                     mIoU (16 classes)  60.9   #9
Semantic Segmentation           SYNTHIA-to-Cityscapes      DAFormer  mIoU (16 classes)  60.9   #6
Unsupervised Domain Adaptation  SYNTHIA-to-Cityscapes      DAFormer  mIoU (13 classes)  67.4   #8
                                                                     mIoU (16 classes)  60.9   #6
