Multi-scale self-guided attention for medical image segmentation

arXiv preprint 2019  ·  Ashish Sinha, Jose Dolz

Even though convolutional neural networks (CNNs) are driving progress in medical image segmentation, standard models still have some drawbacks. First, the use of multi-scale approaches, i.e., encoder-decoder architectures, leads to a redundant use of information, where similar low-level features are extracted multiple times at multiple scales. Second, long-range feature dependencies are not efficiently modeled, resulting in suboptimal discriminative feature representations for each semantic class. In this paper, we attempt to overcome these limitations with the proposed architecture, which captures richer contextual dependencies through guided self-attention mechanisms. This approach integrates local features with their corresponding global dependencies and adaptively highlights interdependent channel maps. Further, an additional loss between the different modules guides the attention mechanisms to neglect irrelevant information and focus on the more discriminative regions of the image by emphasizing relevant feature associations. We evaluate the proposed model on semantic segmentation across three different datasets: abdominal organs, cardiovascular structures, and brain tumors. A series of ablation experiments supports the importance of these attention modules in the proposed architecture. In addition, compared to other state-of-the-art segmentation networks, our model yields better segmentation performance, increasing the accuracy of the predictions while reducing the standard deviation. This demonstrates the efficiency of our approach in generating precise and reliable automatic segmentations of medical images. Our code is made publicly available at https://github.com/sinAshish/Multi-Scale-Attention
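The two self-attention mechanisms described above can be illustrated with a minimal NumPy sketch. A position (spatial) attention block relates every spatial location to every other, while a channel attention block relates feature channels to one another; the full model additionally uses learned 1×1 convolution projections, multi-scale inputs, and the guiding loss, all omitted here. The function names and the fixed `gamma` blending weight are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_attention(feats, gamma=0.5):
    """Spatial self-attention over a (C, H, W) feature map.

    Each of the H*W positions attends to all others, so the output at
    one location aggregates features from the whole image (long-range
    dependencies). `gamma` is a fixed stand-in for a learned scale.
    """
    C, H, W = feats.shape
    f = feats.reshape(C, H * W)       # flatten spatial dimensions
    energy = f.T @ f                  # (N, N) pairwise position similarities
    attn = softmax(energy, axis=-1)   # rows sum to 1
    out = f @ attn.T                  # aggregate features by attention
    return feats + gamma * out.reshape(C, H, W)

def channel_attention(feats, gamma=0.5):
    """Channel self-attention over a (C, H, W) feature map.

    Computes a (C, C) channel-affinity map so that interdependent
    channel maps reinforce each other adaptively.
    """
    C, H, W = feats.shape
    f = feats.reshape(C, -1)
    energy = f @ f.T                  # (C, C) channel similarities
    attn = softmax(energy, axis=-1)
    out = attn @ f
    return feats + gamma * out.reshape(C, H, W)
```

Both blocks keep the feature-map shape, so their outputs can be fused (e.g., summed) before the decoder, which is how dual-attention designs typically combine the spatial and channel branches.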


Results from the Paper


Task                       | Dataset           | Model          | Metric     | Value  | Global Rank
---------------------------|-------------------|----------------|------------|--------|------------
Brain Tumor Segmentation   | BRATS 2018        | MS-Dual-Guided | Dice Score | 0.8037 | # 2
                           |                   |                | MSD        | 0.9    | # 1
                           |                   |                | VS         | 93.08  | # 1
Medical Image Segmentation | CHAOS MRI Dataset | MS-Dual-Guided | Dice Score | 86.75  | # 1
                           |                   |                | MSD        | 66     | # 1
                           |                   |                | VS         | 93.85  | # 1
Medical Image Segmentation | HVSMR             | MS-Dual-Guided | Dice Score | 83.2   | # 1
                           |                   |                | MSD        | 1.19   | # 1
                           |                   |                | VS         | 94.45  | # 1
