DaSeGAN: Domain Adaptation for Segmentation Tasks via Generative Adversarial Networks

29 Sep 2021 · Mario Parreño Lara, Roberto Paredes, Alberto Albiol

A weakness of deep learning methods is that they can fail when there is a mismatch between the source and target data domains. In medical image applications, this situation is common when data from new vendor devices or different hospitals becomes available. Domain adaptation techniques aim to fill this gap by learning mappings between image domains when unlabeled data from the new target domain is available. In other cases, no target domain data (labeled or unlabeled) is available during training. In this latter case, domain generalization methods focus on learning domain-invariant representations that are more robust to new domains. In this paper, a combination of domain adaptation and domain generalization techniques is proposed by leveraging domain-invariant image translations for image segmentation problems. This is achieved by adversarially training a generator that transforms source images into a universal domain. To preserve semantic consistency between the source and universal domains, a segmentation consistency loss between the source and universal predictions is used. Our method was validated on the M&Ms dataset, a multi-source unsupervised domain adaptation and generalization problem, and outperforms previous methods. In particular, our method significantly boosts test results on the unlabeled and unseen domains without hurting performance on the labeled source domains.
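The abstract outlines the core training objective: an adversarially trained generator maps source images into a shared "universal" domain, while a segmentation consistency loss keeps predictions on the source and translated images aligned. The sketch below shows one plausible way to combine these losses in PyTorch; the module names (UniversalGenerator, DomainDiscriminator, SegNet), the domain-confusion formulation, and the lambda_con weight are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a DaSeGAN-style objective, under the assumptions stated above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class UniversalGenerator(nn.Module):
    """Maps a source-domain image to the shared 'universal' domain (assumed architecture)."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        return torch.tanh(self.net(x))


class DomainDiscriminator(nn.Module):
    """Classifies which source domain (vendor/hospital) a translated image came from."""
    def __init__(self, channels=1, n_domains=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_domains)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))  # domain logits


class SegNet(nn.Module):
    """Segmentation network applied to both source and universal images."""
    def __init__(self, channels=1, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, n_classes, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)  # per-pixel class logits


def training_step(x_src, y_src, dom_src, G, D, S, opt_g, opt_d, lambda_con=1.0):
    """One update: translate to the universal domain, keep segmentation consistent.

    x_src: source images, y_src: per-pixel labels, dom_src: source-domain indices.
    """
    x_uni = G(x_src)

    # Discriminator: recover the original domain of each translated image.
    opt_d.zero_grad()
    loss_d = F.cross_entropy(D(x_uni.detach()), dom_src)
    loss_d.backward()
    opt_d.step()

    # Generator + segmenter: domain confusion, supervised segmentation on the
    # translated images, and consistency between source and universal predictions.
    opt_g.zero_grad()
    dom_logits = D(x_uni)
    loss_conf = -F.log_softmax(dom_logits, dim=1).mean()  # push D toward a uniform guess
    seg_src, seg_uni = S(x_src), S(x_uni)
    loss_seg = F.cross_entropy(seg_uni, y_src)            # source labels remain valid after translation
    loss_con = F.kl_div(F.log_softmax(seg_uni, dim=1),
                        F.softmax(seg_src, dim=1), reduction='batchmean')
    loss_g = loss_conf + loss_seg + lambda_con * loss_con
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()


# Example wiring (channel, class, and domain counts are placeholders):
G, D, S = UniversalGenerator(), DomainDiscriminator(n_domains=4), SegNet()
opt_g = torch.optim.Adam(list(G.parameters()) + list(S.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
```

Note that the generator and the segmentation network share one optimizer here, so the consistency and segmentation losses shape both the translation and the predictions; the exact loss weighting and discriminator design in the paper may differ.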
