LViT: Language meets Vision Transformer in Medical Image Segmentation

29 Jun 2022 · Zihan Li, Yunxiang Li, Qingde Li, You Zhang, Puyang Wang, Dazhou Guo, Le Lu, Dakai Jin, Qingqi Hong

Deep learning has been widely used in medical image segmentation and related tasks. However, the performance of existing medical image segmentation models is limited by the difficulty of obtaining a sufficient amount of high-quality labeled data, owing to the high cost of annotation. To overcome this limitation, we propose a new vision-language medical image segmentation model, LViT (Language meets Vision Transformer). In our model, medical text annotations are introduced to compensate for quality deficiencies in the image data. In addition, the text information can guide the generation of pseudo labels and thereby improve their quality in semi-supervised learning. We also propose the Exponential Pseudo-label Iteration mechanism (EPI) to extend LViT to a semi-supervised setting, and the Pixel-Level Attention Module (PLAM) to preserve local image features. Furthermore, an LV (Language-Vision) loss is designed to supervise the training of unlabeled images directly from text information. To validate the performance of LViT, we construct multimodal medical segmentation datasets (image + text) containing pathological images, X-rays, etc. Experimental results show that the proposed LViT achieves better segmentation performance under both fully supervised and semi-supervised conditions. Code and datasets are publicly available.
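The EPI mechanism described in the abstract iteratively refines pseudo labels across training epochs. A minimal sketch of one plausible reading of this idea is an exponential moving average over per-pixel predictions; the function name, the β value, and the toy probability maps below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def epi_update(prev_pseudo, new_pred, beta=0.9):
    """One exponential pseudo-label iteration step (hypothetical sketch):
    blend the previous pseudo-label map with the current model prediction.
    A large beta keeps pseudo labels stable against a single noisy epoch."""
    return beta * prev_pseudo + (1.0 - beta) * new_pred

# Toy per-pixel foreground probabilities for a 2x2 image (illustrative values).
pseudo = np.array([[0.9, 0.2], [0.4, 0.8]])  # pseudo-label map after epoch t-1
pred = np.array([[0.7, 0.4], [0.6, 0.6]])    # model prediction at epoch t

pseudo = epi_update(pseudo, pred, beta=0.9)
# Threshold at 0.5 to obtain the binary mask used to supervise unlabeled images.
mask = (pseudo > 0.5).astype(np.uint8)
```

Because the update is a convex combination, a single unstable prediction shifts each pixel's pseudo label by at most (1 − β), which is the stabilizing effect EPI relies on in semi-supervised training.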



Results from the Paper

| Task                       | Dataset | Model      | Metric | Value | Global Rank |
|----------------------------|---------|------------|--------|-------|-------------|
| Medical Image Segmentation | MoNuSeg | LViT-L     | F1     | 81.01 | #1          |
| Medical Image Segmentation | MoNuSeg | LViT-L     | IoU    | 68.2  | #1          |
| Medical Image Segmentation | MoNuSeg | LViT-LW    | F1     | 80.66 | #2          |
| Medical Image Segmentation | MoNuSeg | LViT-LW    | IoU    | 67.71 | #2          |
| Medical Image Segmentation | MoNuSeg | UCTransNet | F1     | 79.87 | #3          |
| Medical Image Segmentation | MoNuSeg | UCTransNet | IoU    | 66.68 | #3          |
| Medical Image Segmentation | MoNuSeg | GTUNet     | F1     | 79.26 | #6          |
| Medical Image Segmentation | MoNuSeg | GTUNet     | IoU    | 65.94 | #6          |
| Medical Image Segmentation | MoNuSeg | UNet++     | F1     | 77.01 | #7          |
| Medical Image Segmentation | MoNuSeg | UNet++     | IoU    | 63.04 | #7          |