SpineParseNet: Spine Parsing for Volumetric MR Image by a Two-Stage Segmentation Framework with Semantic Image Representation

Spine parsing (i.e., multi-class segmentation of vertebrae and intervertebral discs (IVDs)) in volumetric magnetic resonance (MR) images plays a significant role in the diagnosis and treatment of spinal diseases, yet it remains challenging due to the inter-class similarity and intra-class variation of spine images. Existing methods based on fully convolutional networks fail to explicitly exploit the dependencies between different spinal structures. In this article, we propose a novel two-stage framework, SpineParseNet, for automated spine parsing of volumetric MR images. SpineParseNet consists of a 3D graph convolutional segmentation network (GCSN) for coarse 3D segmentation and a 2D residual U-Net (ResUNet) for 2D segmentation refinement. In the 3D GCSN, region pooling projects the image representation onto a graph representation in which each node corresponds to a specific spinal structure, and the adjacency matrix of the graph is designed according to the anatomical connections between spinal structures. The graph representation is evolved by graph convolutions; the proposed region unpooling module then re-projects the evolved graph representation back to a semantic image representation, which enables the 3D GCSN to generate a reliable coarse segmentation. Finally, the 2D ResUNet refines this segmentation. Experiments on T2-weighted volumetric MR images of 215 subjects show that SpineParseNet achieves mean Dice similarity coefficients of 87.32 ± 4.75%, 87.78 ± 4.64%, and 87.49 ± 3.81% for the segmentation of 10 vertebrae, 9 IVDs, and all 19 spinal structures, respectively. The proposed method has great potential in the clinical diagnosis and treatment of spinal diseases.
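
The region pooling, graph convolution, and region unpooling steps described above can be illustrated with a short sketch. The following is a minimal PyTorch illustration under several assumptions: the module name `GraphReasoning3D`, the soft voxel-to-node assignment used for pooling, and the single GCN layer are hypothetical choices for exposition, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphReasoning3D(nn.Module):
    """Sketch of region pooling -> graph convolution -> region unpooling.

    Hypothetical module for illustration only; the soft-assignment pooling and
    the single GCN layer are assumptions, not the paper's exact implementation.
    """

    def __init__(self, in_channels, node_channels, num_nodes=19, adjacency=None):
        super().__init__()
        self.num_nodes = num_nodes
        # Soft assignment of each voxel to one of the K spinal-structure nodes.
        self.assign = nn.Conv3d(in_channels, num_nodes, kernel_size=1)
        # Channel reduction before pooling voxel features into node features.
        self.reduce = nn.Conv3d(in_channels, node_channels, kernel_size=1)
        # Single graph-convolution weight matrix (kept to one layer for brevity).
        self.gcn_weight = nn.Linear(node_channels, node_channels, bias=False)
        # Fixed adjacency encoding which spinal structures are connected;
        # falls back to identity if none is supplied.
        A = adjacency if adjacency is not None else torch.eye(num_nodes)
        self.register_buffer("A_hat", self._normalize(A))

    @staticmethod
    def _normalize(A):
        # Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2
        A = A + torch.eye(A.size(0))
        d = A.sum(dim=1)
        D_inv_sqrt = torch.diag(d.pow(-0.5))
        return D_inv_sqrt @ A @ D_inv_sqrt

    def forward(self, x):
        B, C, D, H, W = x.shape
        # Region pooling: project image features to K node features.
        assign = F.softmax(self.assign(x), dim=1)             # (B, K, D, H, W)
        feats = self.reduce(x)                                 # (B, C', D, H, W)
        assign_flat = assign.flatten(2)                        # (B, K, N)
        feats_flat = feats.flatten(2).transpose(1, 2)          # (B, N, C')
        nodes = torch.bmm(assign_flat, feats_flat)             # (B, K, C')
        nodes = nodes / (assign_flat.sum(-1, keepdim=True) + 1e-6)
        # Graph convolution: exchange information between connected structures.
        nodes = F.relu(self.gcn_weight(self.A_hat @ nodes))    # (B, K, C')
        # Region unpooling: re-project node features back onto the voxel grid,
        # yielding a semantic image representation.
        out = torch.bmm(assign_flat.transpose(1, 2), nodes)    # (B, N, C')
        return out.transpose(1, 2).reshape(B, -1, D, H, W)     # (B, C', D, H, W)
```

For the 19 spinal structures (10 vertebrae and 9 IVDs), one plausible choice is a chain-like adjacency in which `A[i, j] = 1` whenever structures i and j are anatomically adjacent (each vertebra linked to its neighbouring discs); this connectivity design is an assumption here, as the abstract describes it only at a high level.
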


Datasets

Introduced in the Paper:

MRSpineSeg Challenge
