Source-free unsupervised domain adaptation for cross-modality abdominal multi-organ segmentation

24 Nov 2021 · Jin Hong, Yu-Dong Zhang, Weitian Chen

Domain adaptation is crucial for transferring knowledge from a labeled source CT dataset to an unlabeled target MR dataset in abdominal multi-organ segmentation. At the same time, it is highly desirable to avoid the high annotation cost of the target dataset and to protect the privacy of the source dataset. We therefore propose an effective source-free unsupervised domain adaptation method for cross-modality abdominal multi-organ segmentation that requires no access to the source dataset. The proposed framework comprises two stages. In the first stage, feature-map statistics-guided model adaptation combined with entropy minimization is developed so that the top segmentation network can reliably segment the target images. The pseudo-labels produced by the top segmentation network guide the style compensation network to generate source-like images, and the pseudo-labels produced by the middle segmentation network supervise the training of the desired model (the bottom segmentation network). In the second stage, circular learning and pixel-adaptive mask refinement are used to further improve the performance of the desired model. With this approach, we achieve satisfactory abdominal multi-organ segmentation performance, outperforming existing state-of-the-art domain adaptation methods. The approach can be easily extended to settings in which target annotations exist: with only one labeled MR volume, its performance matches that of supervised learning. Furthermore, the approach is also shown to be effective for source-free unsupervised domain adaptation in the reverse direction (MR to CT).
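
Entropy minimization on the unlabeled target images is a central ingredient of the first stage. The sketch below is a minimal PyTorch illustration of such a target-adaptation loop, not the authors' implementation: the `model`, `target_loader`, `optimizer`, and the choice to adapt purely via the entropy loss are assumptions. Keeping BatchNorm in training mode lets the running feature-map statistics drift toward the target (MR) distribution, which loosely corresponds to the statistics-guided adaptation described above.

```python
# Minimal sketch (assumed setup, not the paper's code): source-free adaptation
# of a pretrained segmentation network via entropy minimization on target data.
import torch
import torch.nn.functional as F

def entropy_loss(logits: torch.Tensor) -> torch.Tensor:
    """Mean per-pixel Shannon entropy of the softmax segmentation output."""
    probs = F.softmax(logits, dim=1)          # (B, C, H, W)
    log_probs = F.log_softmax(logits, dim=1)
    ent = -(probs * log_probs).sum(dim=1)     # (B, H, W)
    return ent.mean()

def adapt_one_epoch(model, target_loader, optimizer, device="cuda"):
    # train() keeps BatchNorm running statistics updating, so the feature-map
    # statistics gradually align with the unlabeled target images.
    model.train()
    for images in target_loader:              # unlabeled target (MR) batches
        images = images.to(device)
        logits = model(images)
        loss = entropy_loss(logits)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In the second stage, the resulting predictions would typically be turned into pseudo-labels and refined (e.g., by the pixel-adaptive mask refinement mentioned above) before being used as supervision for further training.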

