Unsupervised Domain Adaptation for Semantic Segmentation via Class-Balanced Self-Training

Recent deep networks have achieved state-of-the-art performance on a variety of semantic segmentation tasks. Despite such progress, these models often face challenges in real-world "wild tasks" where a large difference exists between labeled training/source data and unseen test/target data. This difference is often referred to as the "domain gap", and can cause significantly decreased performance that cannot be easily remedied by further increasing the representation power. Unsupervised domain adaptation (UDA) seeks to overcome this problem without target domain labels. In this paper, we propose a novel UDA framework based on an iterative self-training (ST) procedure, where the problem is formulated as latent variable loss minimization and can be solved by alternately generating pseudo-labels on target data and re-training the model with these labels. On top of ST, we also propose a novel class-balanced self-training (CBST) framework to avoid the gradual dominance of large classes in pseudo-label generation, and introduce spatial priors to refine the generated labels. Comprehensive experiments show that the proposed methods achieve state-of-the-art semantic segmentation performance under multiple major UDA settings.
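The class-balanced pseudo-label selection described above can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: it assumes softmax outputs are available as a NumPy array, and the function name, the `portion` parameter, and the quantile-based thresholding are simplifications of the paper's per-class confidence selection.

```python
import numpy as np

def class_balanced_pseudo_labels(probs, portion=0.2, ignore_index=255):
    """Assign pseudo-labels with per-class confidence thresholds.

    probs: (N, C, H, W) softmax outputs on target-domain images.
    portion: fraction of each class's predicted pixels to keep, so that
             large classes cannot dominate pseudo-label generation.
    Pixels below their class threshold are marked ignore_index.
    """
    n, c, h, w = probs.shape
    conf = probs.max(axis=1)      # (N, H, W) max class confidence
    pred = probs.argmax(axis=1)   # (N, H, W) predicted class

    # One threshold per class: keep roughly the top `portion` most
    # confident pixels among the pixels predicted as that class.
    thresholds = np.full(c, np.inf)
    for k in range(c):
        conf_k = conf[pred == k]
        if conf_k.size:
            thresholds[k] = np.quantile(conf_k, 1.0 - portion)

    # Select confident pixels; the rest are ignored during re-training.
    return np.where(conf >= thresholds[pred], pred, ignore_index)
```

In the iterative ST loop, these pseudo-labels would be fed back as training targets for the target domain, with `portion` typically increased over rounds so more pixels are trusted as the model adapts.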

Task | Dataset | Model | Metric Name | Metric Value | Global Rank
Image-to-Image Translation | GTAV-to-Cityscapes Labels | CBST | mIoU | 47.0 | # 18
Semi-Supervised Semantic Segmentation | nuScenes | CBST (Range View) | mIoU (1% Labels) | 40.9 | # 7
Semi-Supervised Semantic Segmentation | nuScenes | CBST (Range View) | mIoU (10% Labels) | 60.5 | # 7
Semi-Supervised Semantic Segmentation | nuScenes | CBST (Range View) | mIoU (20% Labels) | 64.3 | # 8
Semi-Supervised Semantic Segmentation | nuScenes | CBST (Range View) | mIoU (50% Labels) | 69.3 | # 7
Semi-Supervised Semantic Segmentation | ScribbleKITTI | CBST (Range View) | mIoU (1% Labels) | 35.7 | # 6
Semi-Supervised Semantic Segmentation | ScribbleKITTI | CBST (Range View) | mIoU (10% Labels) | 50.7 | # 3
Semi-Supervised Semantic Segmentation | ScribbleKITTI | CBST (Range View) | mIoU (20% Labels) | 52.7 | # 6
Semi-Supervised Semantic Segmentation | ScribbleKITTI | CBST (Range View) | mIoU (50% Labels) | 54.6 | # 3
Semi-Supervised Semantic Segmentation | SemanticKITTI | CBST (Range View) | mIoU (1% Labels) | 39.9 | # 6
Semi-Supervised Semantic Segmentation | SemanticKITTI | CBST (Range View) | mIoU (10% Labels) | 53.4 | # 7
Semi-Supervised Semantic Segmentation | SemanticKITTI | CBST (Range View) | mIoU (20% Labels) | 56.1 | # 7
Semi-Supervised Semantic Segmentation | SemanticKITTI | CBST (Range View) | mIoU (50% Labels) | 56.9 | # 9

