Rethinking Ensemble-Distillation for Semantic Segmentation Based Unsupervised Domain Adaptation

29 Apr 2021  ·  Chen-Hao Chao, Bo-Wun Cheng, Chun-Yi Lee

Recent research on unsupervised domain adaptation (UDA) has demonstrated that end-to-end ensemble learning frameworks are a compelling option for UDA tasks. Nevertheless, these end-to-end ensemble learning methods often lack flexibility, as any modification to the ensemble requires retraining of the entire framework. To address this problem, we propose a flexible ensemble-distillation framework for semantic segmentation-based UDA, which allows an arbitrary composition of members in the ensemble while still maintaining superior performance. To achieve such flexibility, our framework is designed to be robust against the output inconsistency and the performance variation of the members within the ensemble. To examine the effectiveness and robustness of our method, we perform an extensive set of experiments on both the GTA5-to-Cityscapes and SYNTHIA-to-Cityscapes benchmarks to quantitatively inspect the improvements achievable by our method. We further provide detailed analyses to validate that our design choices are practical and beneficial. The experimental evidence confirms that the proposed method indeed offers superior performance, robustness, and flexibility in semantic segmentation-based UDA tasks compared to contemporary baseline methods.
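The abstract describes the framework only at a high level. As a rough illustration of the general ensemble-distillation recipe it builds on (not the authors' exact procedure), the sketch below fuses per-pixel predictions from an arbitrary set of frozen segmentation models into pseudo-labels on unlabeled target-domain images and distills them into a single student network. The averaging-based fusion, the confidence threshold, and all function names are assumptions made for illustration.

```python
# Minimal sketch of ensemble-distillation for segmentation-based UDA.
# Assumes each model maps a batch of images (B, 3, H, W) to logits (B, C, H, W).
import torch
import torch.nn.functional as F

def fuse_ensemble_predictions(members, images):
    """Average per-pixel class probabilities over the ensemble members."""
    probs = []
    with torch.no_grad():
        for model in members:
            logits = model(images)                      # (B, C, H, W)
            probs.append(F.softmax(logits, dim=1))
    return torch.stack(probs, dim=0).mean(dim=0)        # (B, C, H, W)

def distillation_step(student, members, images, optimizer,
                      confidence_threshold=0.9, ignore_index=255):
    """One step: fused ensemble output -> pseudo-labels -> train the student."""
    fused = fuse_ensemble_predictions(members, images)
    confidence, pseudo_labels = fused.max(dim=1)         # both (B, H, W)
    # Discard low-confidence pixels so weak or inconsistent members
    # do not mislead the student (threshold is an illustrative choice).
    pseudo_labels[confidence < confidence_threshold] = ignore_index

    student_logits = student(images)
    loss = F.cross_entropy(student_logits, pseudo_labels,
                           ignore_index=ignore_index)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the ensemble members are only queried for their softmax outputs, any member can be added, removed, or replaced without retraining the others; only the student needs to be re-distilled.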

Results

Task | Dataset | Model | Metric | Value | Global Rank
Unsupervised Domain Adaptation | GTAV-to-Cityscapes Labels | Re-EnD-UDA | mIoU | 57.98 | #15
Unsupervised Domain Adaptation | SYNTHIA-to-Cityscapes | Re-EnD-UDA | mIoU (13 classes) | 59.95 | #13
Unsupervised Domain Adaptation | SYNTHIA-to-Cityscapes | Re-EnD-UDA | mIoU | 52.58 | #9
