Automated Multiclass Cardiac Volume Segmentation and Model Generation

14 Sep 2019 · Erik Gaasedelen, Alex Deakyne, Paul Iaizzo

Many strides have been made in the semantic segmentation of multiple classes within an image, largely due to advancements in deep learning and convolutional neural networks (CNNs). Features within a CNN are learned automatically during training, which allows semantic information within the images to be abstracted. These deep learning models are powerful enough to handle the segmentation of multiple classes without the need for multiple networks. Despite these advancements, few attempts have been made to automatically segment multiple anatomical features within medical imaging datasets obtained from CT or MRI scans. This poses a unique challenge because of the three-dimensional nature of medical imaging data. To alleviate the 3D modality problem, we propose a multi-axis ensemble method, applied to a dataset of CT scans with four-chamber cardiac segmentations. Inspired by the typical three-axis view used by humans, this technique aims to maximize the 3D spatial information afforded to the model while remaining efficient for consumer-grade inference hardware. In our experiments, multi-axis ensembling combined with pragmatic voxel preprocessing greatly increased the mean intersection over union of our predictions over the complete DICOM dataset.
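
The abstract does not spell out the ensemble mechanics, but a minimal sketch of the general idea follows, assuming one trained 2D segmentation network per anatomical axis (axial, coronal, sagittal), softmax probability averaging across the three passes, and a simple Hounsfield-unit clipping step standing in for the voxel preprocessing. The model interface, axis convention, and HU window are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import torch

def preprocess(volume, hu_window=(-200.0, 600.0)):
    """Clip raw CT voxels to a Hounsfield-unit window and rescale to [0, 1].
    The window here is an illustrative choice, not the paper's."""
    lo, hi = hu_window
    return (np.clip(volume.astype(np.float32), lo, hi) - lo) / (hi - lo)

def predict_along_axis(model, volume, axis):
    """Run a 2D multiclass segmentation model over every slice taken along
    `axis` of a (D, H, W) volume, returning (C, D, H, W) class probabilities.
    The model is assumed to map a (1, 1, h, w) input to (1, C, h, w) logits."""
    moved = np.moveaxis(volume, axis, 0)                  # slices lead
    probs = []
    with torch.no_grad():
        for sl in moved:
            x = torch.from_numpy(np.ascontiguousarray(sl))[None, None]
            probs.append(torch.softmax(model(x), dim=1)[0].numpy())
    stacked = np.stack(probs)                             # (slices, C, h, w)
    return np.moveaxis(stacked, 0, axis + 1)              # back to (C, D, H, W)

def multi_axis_ensemble(models, volume):
    """Average per-voxel probabilities from the three axis-wise passes and
    take the argmax to produce a single multiclass label volume.
    `models` maps axis index (0, 1, 2) to its trained 2D network."""
    vol = preprocess(volume)
    summed = sum(predict_along_axis(models[a], vol, a) for a in (0, 1, 2))
    return np.argmax(summed, axis=0)                      # (D, H, W) labels
```

Averaging softmax probabilities rather than hard labels is one common way to combine the three views; majority voting over per-axis argmax predictions would be an equally plausible reading of the ensembling step.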
