Multi-Depth Fusion Network for Whole-Heart CT Image Segmentation

Obtaining precise whole-heart segmentation from computed tomography (CT) or other imaging modalities is a prerequisite for clinical analysis of cardiac status and plays an important role in the treatment of cardiovascular diseases. However, whole-heart segmentation remains a challenging task due to characteristics of medical images such as far more background voxels than foreground voxels and indistinct boundaries between adjacent tissues. In this paper, we first present a new deeply supervised 3D U-Net that applies multi-depth fusion to the original network to better extract context information. We then apply focal loss to the field of image segmentation, extending it to multi-category tasks. Finally, the focal loss is incorporated into the Dice loss function, which mitigates the category imbalance problem, to form a new loss function that we call hybrid loss. We evaluate our pipeline on the MICCAI 2017 whole-heart CT dataset, where it obtains a Dice score of 90.73%, better than most state-of-the-art methods.
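The hybrid loss described above — a multi-category focal loss combined with a multi-class Dice loss — could be sketched as follows in PyTorch. This is a minimal illustration, not the authors' implementation: the weighting factor `alpha`, the focusing parameter `gamma`, and the exact form of the combination are assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn.functional as F

def multiclass_focal_loss(logits, target, gamma=2.0, eps=1e-7):
    """Focal loss generalized to multi-category segmentation.

    logits: (N, C, D, H, W) raw network outputs
    target: (N, D, H, W) integer class labels
    """
    probs = F.softmax(logits, dim=1).clamp(eps, 1.0 - eps)
    # Probability assigned to the true class at each voxel.
    pt = probs.gather(1, target.unsqueeze(1)).squeeze(1)
    # Down-weight easy voxels (pt close to 1) by (1 - pt)^gamma.
    return (-((1.0 - pt) ** gamma) * pt.log()).mean()

def multiclass_dice_loss(logits, target, eps=1e-7):
    """Soft Dice loss averaged over classes; addresses class imbalance."""
    probs = F.softmax(logits, dim=1)
    num_classes = logits.shape[1]
    one_hot = F.one_hot(target, num_classes).movedim(-1, 1).float()
    spatial_dims = tuple(range(2, logits.dim()))
    inter = (probs * one_hot).sum(dim=spatial_dims)
    union = probs.sum(dim=spatial_dims) + one_hot.sum(dim=spatial_dims)
    dice = (2.0 * inter + eps) / (union + eps)
    return 1.0 - dice.mean()

def hybrid_loss(logits, target, alpha=1.0, gamma=2.0):
    """Hybrid loss: Dice loss plus alpha-weighted multi-class focal loss.

    alpha and gamma are hypothetical defaults; the paper's values
    are not given in the abstract.
    """
    return multiclass_dice_loss(logits, target) + \
        alpha * multiclass_focal_loss(logits, target, gamma=gamma)
```

In this formulation the Dice term counters the heavy foreground/background imbalance, while the focal term concentrates the gradient on hard, ambiguous voxels such as those near indistinct tissue boundaries.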
