MS-Twins: Multi-Scale Deep Self-Attention Networks for Medical Image Segmentation

12 Dec 2023  ·  Jing Xu ·

Although transformers are the dominant architecture in natural language processing, few studies have applied them to medical imaging. Thanks to their ability to model long-range dependencies, transformers are expected to help convolutional neural networks overcome their inherent spatial inductive bias. However, recently proposed transformer-based segmentation methods use the transformer only as an auxiliary module that encodes global context into a convolutional representation, and there has been little study of how best to combine self-attention (the core of the transformer) with convolution. To address this, the article proposes MS-Twins (Multi-Scale Twins), a powerful segmentation model built on the combination of self-attention and convolution. By combining features at different scales and cascading them, MS-Twins captures both semantic and fine-grained information. Compared with existing network architectures, MS-Twins improves significantly on previous transformer-based methods on two widely used datasets, Synapse and ACDC. In particular, its performance on Synapse is 8% higher than that of SwinUNet. Even against nnUNet, the strongest fully convolutional medical image segmentation network, MS-Twins still retains a slight advantage on Synapse and ACDC.
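The central idea of fusing self-attention (global context) with convolution (local detail) can be sketched in a toy form. The sketch below is an illustrative assumption, not the paper's actual MS-Twins block: it uses a single-head attention with identity projections and a depthwise 1-D convolution, fused by simple addition.

```python
import numpy as np

def self_attention(x):
    # x: (seq_len, dim). Toy single-head attention with identity Q/K/V
    # projections; each output token is a softmax-weighted mix of ALL
    # tokens, i.e. global context.
    scores = x @ x.T / np.sqrt(x.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

def depthwise_conv1d(x, kernel):
    # x: (seq_len, dim); kernel: (k,) applied independently per channel
    # with same-padding, i.e. local context only.
    k = len(kernel)
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        out[i] = (xp[i:i + k] * kernel[:, None]).sum(axis=0)
    return out

def mixed_block(x, kernel):
    # Fuse the global (attention) and local (convolution) branches.
    # Addition is an assumed fusion; the paper's cascading scheme differs.
    return self_attention(x) + depthwise_conv1d(x, kernel)

x = np.random.default_rng(0).standard_normal((6, 4))
y = mixed_block(x, np.array([0.25, 0.5, 0.25]))
print(y.shape)  # (6, 4)
```

In a real multi-scale design this block would be applied to feature maps at several resolutions and the outputs cascaded, so that coarse scales contribute semantics and fine scales contribute detail.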



