SUSAN: Segment Unannotated image Structure using Adversarial Network

3 Dec 2018  ·  Fang Liu

Segmentation of magnetic resonance (MR) images is a fundamental step in many medical imaging-based applications. The recent application of deep convolutional neural networks (CNNs) to image processing has had a significant impact on medical image segmentation. Training a segmentation CNN typically requires images paired with annotation data representing pixel-wise tissue labels, referred to as masks. However, supervised training of highly efficient CNNs with deeper structures and more network parameters requires large numbers of training images and paired tissue masks. There is therefore a great need for a generalized CNN-based segmentation method applicable to a wide variety of MR image datasets with different tissue contrasts. The purpose of this study was to develop and evaluate a generalized CNN-based method for fully automated segmentation of different MR image datasets using a single set of annotated training data. A cycle-consistent generative adversarial network (CycleGAN) serves as the core of the proposed method, performing image-to-image translation between MR image datasets with different tissue contrasts. A joint segmentation network is incorporated into the adversarial network to provide additional segmentation functionality. The proposed method was evaluated for segmenting bone and cartilage on two clinical knee MR image datasets acquired at our institution, using only a single set of annotated data from a publicly available knee MR image dataset. The new technique may further improve the applicability and efficiency of CNN-based segmentation of medical images while eliminating the need for large amounts of annotated training data.
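
The objective described above combines CycleGAN's adversarial and cycle-consistency losses with a supervised segmentation loss on the annotated domain. Below is a minimal PyTorch sketch of such a combined training objective, not the authors' implementation; all module and variable names (G_ab, G_ba, D_a, D_b, S), the tiny network stand-ins, and the choice to supervise the segmenter on both real annotated images and their translations are illustrative assumptions.

```python
# Minimal sketch of a CycleGAN with a joint segmentation network, as
# described in the abstract: domain A is the annotated source contrast,
# domain B is the unannotated target contrast. All names and network
# shapes here are assumptions for illustration, not the paper's code.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class TinyGenerator(nn.Module):
    """Stand-in for a CycleGAN generator (e.g. a ResNet translator)."""
    def __init__(self, ch=1):
        super().__init__()
        self.net = nn.Sequential(conv_block(ch, 32), conv_block(32, 32),
                                 nn.Conv2d(32, ch, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Stand-in for a PatchGAN discriminator."""
    def __init__(self, ch=1):
        super().__init__()
        self.net = nn.Sequential(conv_block(ch, 32),
                                 nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class TinySegmenter(nn.Module):
    """Stand-in for the joint segmentation network (e.g. a U-Net)."""
    def __init__(self, ch=1, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(conv_block(ch, 32),
                                 nn.Conv2d(32, n_classes, 3, padding=1))
    def forward(self, x):
        return self.net(x)

G_ab, G_ba = TinyGenerator(), TinyGenerator()  # A->B and B->A translators
D_a, D_b = TinyDiscriminator(), TinyDiscriminator()
S = TinySegmenter()                            # joint segmentation head

adv = nn.MSELoss()          # least-squares GAN loss, as in CycleGAN
cyc = nn.L1Loss()           # cycle-consistency loss
seg = nn.CrossEntropyLoss() # pixel-wise segmentation loss
lam_cyc, lam_seg = 10.0, 1.0

def generator_step(real_a, mask_a, real_b):
    """One generator/segmenter loss on a batch from each domain."""
    fake_b, fake_a = G_ab(real_a), G_ba(real_b)
    # Adversarial terms: translated images should fool both discriminators.
    loss_adv = adv(D_b(fake_b), torch.ones_like(D_b(fake_b))) + \
               adv(D_a(fake_a), torch.ones_like(D_a(fake_a)))
    # Cycle consistency: A->B->A and B->A->B should reconstruct inputs.
    loss_cyc = cyc(G_ba(fake_b), real_a) + cyc(G_ab(fake_a), real_b)
    # Joint segmentation: masks exist only for domain A, so supervise S
    # on real A images and (as an assumed choice) on their translations.
    loss_seg = seg(S(real_a), mask_a) + seg(S(fake_b), mask_a)
    return loss_adv + lam_cyc * loss_cyc + lam_seg * loss_seg

# Example usage with random tensors standing in for MR slices and masks.
a = torch.randn(2, 1, 64, 64)         # annotated-domain images
m = torch.randint(0, 3, (2, 64, 64))  # pixel-wise tissue masks
b = torch.randn(2, 1, 64, 64)         # unannotated-domain images
print(generator_step(a, m, b).item())
```

At inference time, only the segmentation network (and, if needed, a translator into the annotated contrast) would be applied to the unannotated dataset, which is what allows a single set of masks to serve multiple tissue contrasts.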
