Unsupervised Domain Adaptation for the Histopathological Cell Segmentation through Self-Ensembling

Histopathological images are generally considered the gold standard for clinical diagnosis and cancer grading. Accurate segmentation of cells/nuclei from histopathological images is a critical step toward obtaining reliable morphological information for quantitative analysis. However, cell/nuclei segmentation relies heavily on well-annotated datasets, which are extremely labor-intensive, time-consuming, and expensive to produce in practice. Although pretrained models can be fine-tuned on a target dataset, it is often difficult to collect enough target training images for proper fine-tuning. Therefore, there is a need for methods that can transfer learned knowledge from one domain to another without additional target annotations. In this paper, we propose a novel framework for cell segmentation on unlabeled images through unsupervised domain adaptation with self-ensembling. This is achieved by applying generative adversarial networks (GANs) for unsupervised domain adaptation of cell segmentation across different tissues, where images from the source and target domains are differentiated by a learned discriminator. In addition, we present a self-ensembling model that treats the source and target domains together as a semi-supervised segmentation task to reduce the discrepancy between their outputs. We further apply a conditional random field (CRF) as post-processing to preserve local consistency in the outputs. We validate our framework for unsupervised domain adaptation on three public cell segmentation datasets captured from different tissue types, where it achieves superior performance compared with state-of-the-art methods.
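The abstract combines three ingredients: supervised segmentation on the labeled source domain, adversarial alignment of target predictions through a discriminator, and a self-ensembling consistency term on the unlabeled target domain. The sketch below illustrates how one training step of such a framework could look. It is a minimal, hedged reconstruction, not the paper's implementation: the network definitions, loss weights, the mean-teacher-style EMA update, and the choice of feeding softmax predictions to the discriminator are all illustrative assumptions.

```python
# Minimal sketch of one training step: source supervision + adversarial
# alignment + self-ensembling consistency. All hyperparameters and design
# choices here are assumptions for illustration only.
import torch
import torch.nn.functional as F

def train_step(student, teacher, discriminator,
               seg_opt, disc_opt,
               src_imgs, src_masks, tgt_imgs,
               lambda_adv=0.001, lambda_con=0.1, ema_decay=0.99):
    # 1) Supervised segmentation loss on labeled source images.
    src_logits = student(src_imgs)
    seg_loss = F.cross_entropy(src_logits, src_masks)

    # 2) Adversarial loss: push target predictions to fool the discriminator
    #    into labeling them as "source" (label 1).
    tgt_prob = F.softmax(student(tgt_imgs), dim=1)
    adv_pred = discriminator(tgt_prob)
    adv_loss = F.binary_cross_entropy_with_logits(
        adv_pred, torch.ones_like(adv_pred))

    # 3) Self-ensembling consistency: the student should agree with the
    #    EMA teacher's predictions on target images.
    with torch.no_grad():
        teacher_prob = F.softmax(teacher(tgt_imgs), dim=1)
    con_loss = F.mse_loss(tgt_prob, teacher_prob)

    seg_opt.zero_grad()
    (seg_loss + lambda_adv * adv_loss + lambda_con * con_loss).backward()
    seg_opt.step()

    # 4) Train the discriminator to separate source from target predictions.
    disc_opt.zero_grad()
    d_src = discriminator(F.softmax(src_logits.detach(), dim=1))
    d_tgt = discriminator(tgt_prob.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src)) +
              F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt)))
    d_loss.backward()
    disc_opt.step()

    # 5) Update the teacher as an exponential moving average of the student.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(ema_decay).add_(s_p, alpha=1.0 - ema_decay)

    return seg_loss.item(), adv_loss.item(), con_loss.item(), d_loss.item()
```

CRF post-processing, mentioned in the abstract, would be applied to the final target predictions at inference time rather than inside this training loop.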
