LViT: Language meets Vision Transformer in Medical Image Segmentation

29 Jun 2022  ·  Zihan Li, Yunxiang Li, Qingde Li, Puyang Wang, Dazhou Guo, Le Lu, Dakai Jin, You Zhang, Qingqi Hong

Deep learning has been widely used in medical image segmentation and related tasks. However, the performance of existing medical image segmentation models is limited by the difficulty of obtaining sufficient high-quality labeled data, owing to the prohibitive cost of annotation. To alleviate this limitation, we propose LViT (Language meets Vision Transformer), a new text-augmented medical image segmentation model. In LViT, medical text annotations are incorporated to compensate for quality deficiencies in the image data. In addition, the text information guides the generation of higher-quality pseudo labels in semi-supervised learning. We also propose an Exponential Pseudo-label Iteration mechanism (EPI) to help the Pixel-Level Attention Module (PLAM) preserve local image features in the semi-supervised LViT setting. In our model, an LV (Language-Vision) loss is designed to directly supervise the training of unlabeled images using text information. For evaluation, we construct three multimodal medical segmentation datasets (image + text) containing X-ray and CT images. Experimental results show that LViT achieves superior segmentation performance in both fully supervised and semi-supervised settings. The code and datasets are available at https://github.com/HUANGLIZI/LViT.
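The abstract does not spell out how the Exponential Pseudo-label Iteration mechanism (EPI) updates pseudo labels. A common reading of "exponential iteration" is an exponential moving average (EMA) that blends the previous pseudo label with the current model prediction, stabilizing labels for unlabeled images across training rounds. The sketch below illustrates that idea only; the function names, the momentum value `alpha`, and the thresholding step are assumptions, not details taken from the paper.

```python
import numpy as np

def epi_update(prev_pseudo, new_pred, alpha=0.9):
    """Hypothetical EPI step: exponentially blend the previous soft pseudo
    label with the current prediction. `alpha` is an assumed momentum
    hyperparameter, not a value from the paper."""
    return alpha * prev_pseudo + (1.0 - alpha) * new_pred

def binarize(prob_map, threshold=0.5):
    """Convert the soft pseudo label into a hard segmentation mask."""
    return (prob_map >= threshold).astype(np.uint8)

# Toy example: 2x2 foreground-probability maps for one unlabeled image.
prev = np.array([[0.2, 0.8], [0.6, 0.4]])   # pseudo label from last round
pred = np.array([[0.9, 0.7], [0.1, 0.5]])   # current model prediction
soft = epi_update(prev, pred, alpha=0.9)    # smoothed soft pseudo label
mask = binarize(soft)                       # hard mask used for supervision
```

Because `alpha` is close to 1, a single noisy prediction only nudges the pseudo label, which is the usual motivation for EMA-style label updates in semi-supervised segmentation.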


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Medical Image Segmentation | MoNuSeg | LViT-L | F1 | 81.01 | # 1 |
| Medical Image Segmentation | MoNuSeg | LViT-L | IoU | 68.2 | # 2 |
| Medical Image Segmentation | MoNuSeg | LViT-LW | F1 | 80.66 | # 3 |
| Medical Image Segmentation | MoNuSeg | LViT-LW | IoU | 67.71 | # 3 |
| Medical Image Segmentation | MoNuSeg | UCTransNet | F1 | 79.87 | # 4 |
| Medical Image Segmentation | MoNuSeg | UCTransNet | IoU | 66.68 | # 4 |
| Medical Image Segmentation | MoNuSeg | GTUNet | F1 | 79.26 | # 7 |
| Medical Image Segmentation | MoNuSeg | GTUNet | IoU | 65.94 | # 8 |
| Medical Image Segmentation | MoNuSeg | UNet++ | F1 | 77.01 | # 8 |
| Medical Image Segmentation | MoNuSeg | UNet++ | IoU | 63.04 | # 9 |