Towards a Robust and Generalized Medical Image Segmentation Framework

9 Aug 2021 · Yurong Chen

Deep learning-based computer-aided diagnosis is gradually being deployed to review and analyze medical images. However, this paradigm is restricted in real-world clinical applications by poor robustness and generalization, and the issue is exacerbated when training data are scarce. In this paper, we address the challenge from a transfer learning point of view. Unlike the common setting of transferring knowledge from the natural image domain to the medical image domain, we find that knowledge from the same domain further boosts model robustness and generalization. We therefore propose a novel two-stage framework for robust, generalized medical image segmentation. First, an unsupervised tile-wise autoencoder pretraining architecture is proposed to learn both local and global knowledge. Second, the downstream segmentation model is coupled with an auxiliary reconstruction network; the reconstruction branch encourages the model to capture more general semantic features. Experiments on lung segmentation across multiple chest X-ray datasets are conducted. Comprehensive results demonstrate the proposed framework's superior robustness to corruption and high generalization performance on unseen datasets, especially when training data are limited.
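The two ingredients of the framework can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the tile geometry (non-overlapping square tiles), the loss choices (binary cross-entropy for segmentation, mean squared error for reconstruction), and the weighting factor `lam` are all illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np

def split_into_tiles(image, tile_size):
    """Split a 2-D image into non-overlapping square tiles (row-major order).

    In tile-wise autoencoder pretraining, each tile would be encoded and
    reconstructed, forcing the encoder to learn local structure; tile size
    here is an assumption for illustration.
    """
    h, w = image.shape
    assert h % tile_size == 0 and w % tile_size == 0
    tiles = [
        image[r:r + tile_size, c:c + tile_size]
        for r in range(0, h, tile_size)
        for c in range(0, w, tile_size)
    ]
    return np.stack(tiles)

def reassemble_tiles(tiles, image_shape):
    """Inverse of split_into_tiles: stitch tiles back into the full image."""
    h, w = image_shape
    t = tiles.shape[1]
    out = np.zeros(image_shape, dtype=tiles.dtype)
    i = 0
    for r in range(0, h, t):
        for c in range(0, w, t):
            out[r:r + t, c:c + t] = tiles[i]
            i += 1
    return out

def joint_loss(seg_pred, seg_target, recon_pred, recon_target, lam=0.5):
    """Hypothetical downstream objective: segmentation loss (binary
    cross-entropy) plus a lam-weighted auxiliary reconstruction loss (MSE).
    The reconstruction term is what encourages more general features."""
    eps = 1e-7
    p = np.clip(seg_pred, eps, 1 - eps)
    bce = -np.mean(seg_target * np.log(p) + (1 - seg_target) * np.log(1 - p))
    mse = np.mean((recon_pred - recon_target) ** 2)
    return bce + lam * mse
```

For example, an 8×8 image with `tile_size=4` yields four 4×4 tiles, and `reassemble_tiles` recovers the original image exactly; in the downstream stage, a perfect reconstruction drives the MSE term to zero and the joint loss reduces to the segmentation term alone.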
