Learning Robust Medical Image Segmentation from Multi-source Annotations

2 Apr 2023 · Yifeng Wang, Luyang Luo, Mingxiang Wu, Qiong Wang, Hao Chen

Collecting annotations from multiple independent sources is a common practice in medical image segmentation, as it can mitigate the noise and bias introduced by any single source. However, learning segmentation networks from multi-source annotations remains challenging because of the uncertainties arising from inter-annotation variance and varying image quality. In this paper, we propose an Uncertainty-guided Multi-source Annotation Network (UMA-Net), which guides the training process with uncertainty estimation at both the pixel and the image level. First, an annotation uncertainty estimation module (AUEM) learns the pixel-wise uncertainty of each annotation and guides the network to learn from reliable pixels through an uncertainty-weighted segmentation loss. Second, a quality assessment module (QAM) assesses the image-level quality of input samples based on the previously estimated annotation uncertainties. Importantly, instead of discarding low-quality samples, we introduce an auxiliary predictor that learns from them, preserving their representation knowledge in the backbone without accumulating errors in the primary predictor. Extensive experiments demonstrate the effectiveness and feasibility of the proposed UMA-Net on diverse datasets, including 2D chest X-ray segmentation, fundus image segmentation, and 3D breast DCE-MRI segmentation.
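
The abstract describes two mechanisms that can be sketched concretely: a pixel-wise uncertainty-weighted segmentation loss (the AUEM idea) and an image-level quality gate that routes low-quality samples to an auxiliary predictor (the QAM idea). The PyTorch sketch below is only an illustration under assumptions, not the paper's implementation; the function names, the `quality_threshold` parameter, and the mean-reliability quality score are hypothetical choices made for readability.

```python
import torch
import torch.nn.functional as F


def uncertainty_weighted_seg_loss(logits, annotation, pixel_uncertainty):
    """Down-weight pixels whose annotation is estimated to be unreliable.

    pixel_uncertainty: per-pixel values in [0, 1], where 1 means "highly
    uncertain" (hypothetical output of an AUEM-like module).
    """
    # Per-pixel binary cross-entropy against one annotation source.
    per_pixel = F.binary_cross_entropy_with_logits(
        logits, annotation, reduction="none"
    )
    # Reliable pixels (low uncertainty) contribute more to the loss.
    weights = 1.0 - pixel_uncertainty
    return (weights * per_pixel).sum() / weights.sum().clamp(min=1e-6)


def training_step(backbone, primary_head, auxiliary_head,
                  image, annotations, pixel_uncertainties,
                  quality_threshold=0.5):
    """One illustrative step: route low-quality samples to an auxiliary head.

    `annotations` and `pixel_uncertainties` are lists with one entry per
    annotation source; the QAM-style quality score here is simply the mean
    estimated reliability over all sources (an assumption, not the paper's
    exact formulation).
    """
    features = backbone(image)

    # Image-level quality: average reliability across sources and pixels.
    reliability = torch.stack([1.0 - u for u in pixel_uncertainties])
    quality_score = reliability.mean()

    # High-quality samples train the primary predictor; low-quality samples
    # train the auxiliary predictor, so the backbone still learns from them
    # without accumulating errors in the primary head.
    head = primary_head if quality_score >= quality_threshold else auxiliary_head
    logits = head(features)

    losses = [
        uncertainty_weighted_seg_loss(logits, ann, unc)
        for ann, unc in zip(annotations, pixel_uncertainties)
    ]
    return torch.stack(losses).mean()
```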
