Multi-source Domain Adaptation for Semantic Segmentation

Simulation-to-real domain adaptation for semantic segmentation has been actively studied for applications such as autonomous driving. Existing methods mainly focus on a single-source setting, which cannot easily handle the more practical scenario of multiple sources with different distributions. In this paper, we investigate multi-source domain adaptation for semantic segmentation. Specifically, we design a novel framework, termed Multi-source Adversarial Domain Aggregation Network (MADAN), which can be trained in an end-to-end manner. First, we generate an adapted domain for each source with dynamic semantic consistency while aligning it to the target at the pixel level in a cycle-consistent manner. Second, we propose a sub-domain aggregation discriminator and a cross-domain cycle discriminator to aggregate the different adapted domains more closely. Finally, feature-level alignment is performed between the aggregated domain and the target domain while training the segmentation network. Extensive experiments on adaptation from the synthetic GTA and SYNTHIA datasets to the real Cityscapes and BDDS datasets demonstrate that the proposed MADAN model outperforms state-of-the-art approaches. Our source code is released at: https://github.com/Luodian/MADAN.
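
To make the pipeline described above concrete, the sketch below illustrates two of the losses in PyTorch: a sub-domain aggregation discriminator that classifies which adapted source an image came from (with a generator-side term that pushes the adapted domains toward being indistinguishable), and a dynamic semantic consistency term that compares segmentation predictions on a source image and its pixel-adapted version. This is a minimal illustrative sketch, not the authors' implementation; the names (SubdomainAggregationDiscriminator, sad_discriminator_loss, sad_generator_loss, dynamic_semantic_consistency) and the uniform-prediction generator objective are assumptions made here for illustration — see the released code at the URL above for the actual training setup.

```python
# Minimal illustrative sketch of MADAN-style losses (assumed forms, not the
# authors' implementation; see https://github.com/Luodian/MADAN for the real one).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SubdomainAggregationDiscriminator(nn.Module):
    """Hypothetical patch discriminator: predicts, per patch, which adapted
    source domain an image came from (one logit channel per source)."""

    def __init__(self, num_sources: int, in_ch: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, num_sources, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)  # (B, num_sources, h, w) patch-level logits


def sad_discriminator_loss(disc, adapted_images):
    """Train the discriminator to tell the adapted domains apart.

    adapted_images: list of image batches, one per adapted source domain.
    """
    loss = 0.0
    for idx, imgs in enumerate(adapted_images):
        logits = disc(imgs)
        target = torch.full(
            (logits.size(0), logits.size(2), logits.size(3)),
            idx, dtype=torch.long, device=imgs.device,
        )
        loss = loss + F.cross_entropy(logits, target)
    return loss / len(adapted_images)


def sad_generator_loss(disc, adapted_images):
    """Generator-side objective (an assumption here): push the discriminator
    toward a uniform prediction so adapted domains become indistinguishable."""
    loss = 0.0
    for imgs in adapted_images:
        log_probs = F.log_softmax(disc(imgs), dim=1)
        loss = loss - log_probs.mean()  # maximized when predictions are uniform
    return loss / len(adapted_images)


def dynamic_semantic_consistency(seg_net, src_img, adapted_img):
    """KL divergence between segmentation predictions on a source image and its
    pixel-adapted counterpart; 'dynamic' because seg_net is the task network
    currently being trained rather than a frozen pretrained one."""
    with torch.no_grad():
        p_src = F.softmax(seg_net(src_img), dim=1)
    log_p_adapted = F.log_softmax(seg_net(adapted_img), dim=1)
    return F.kl_div(log_p_adapted, p_src, reduction="batchmean")
```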

Published at NeurIPS 2019.
Task               Dataset                        Model   Metric   Value   Global Rank
Domain Adaptation  GTA5+Synscapes to Cityscapes   MADAN   mIoU     55.7    #2
Domain Adaptation  GTAV+Synscapes to Cityscapes   MADAN   mIoU     55.7    #3
