Push the Boundary of SAM: A Pseudo-label Correction Framework for Medical Segmentation

The Segment Anything Model (SAM) has emerged as a leading approach for zero-shot segmentation, offering the advantage of avoiding pixel-wise annotation. It is particularly appealing in medical image segmentation, where annotation is laborious and expertise-demanding. However, directly applying SAM often yields inferior results compared with conventional fully supervised segmentation networks. An alternative is to use SAM as an initial stage that generates pseudo labels for further network training, but performance is then limited by the quality of those pseudo labels. In this paper, we propose a novel label correction framework to push the boundary of SAM-based segmentation. Our model uses a label quality evaluation module to distinguish noisy labels from clean ones. This enables the noisy labels to be corrected by an uncertainty-based self-correction module, thereby enriching the clean training set. Finally, we retrain the segmentation network with the updated labels to optimize its weights for future predictions. One key advantage of our model is its ability to train deep networks from SAM-generated pseudo labels without relying on expert-level annotations while still attaining good segmentation performance. We demonstrate the effectiveness of the proposed model on three public datasets, showing that it improves segmentation accuracy and outperforms baseline label correction methods.
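The paper does not publish its implementation details in this abstract, but the uncertainty-based self-correction step it describes can be illustrated with a minimal sketch. Here we assume (hypothetically) that uncertainty is measured as per-pixel predictive entropy of the segmentation network's softmax output, and that a pixel whose entropy exceeds a threshold `tau` has its SAM-generated pseudo label replaced by the network's own prediction; the function name, threshold, and entropy criterion are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def entropy(probs, eps=1e-8):
    """Per-pixel predictive entropy of softmax probabilities, shape (C, H, W)."""
    return -np.sum(probs * np.log(probs + eps), axis=0)

def correct_pseudo_labels(probs, pseudo_labels, tau=0.5):
    """Hypothetical uncertainty-based self-correction sketch.

    Pixels whose predictive entropy exceeds `tau` are treated as noisy and
    replaced by the network's own argmax prediction; low-entropy pixels keep
    their SAM-generated pseudo label and are flagged as "clean".
    """
    unc = entropy(probs)                  # (H, W) uncertainty map
    net_pred = np.argmax(probs, axis=0)   # network's own hard prediction
    corrected = np.where(unc > tau, net_pred, pseudo_labels)
    clean_mask = unc <= tau               # pixels contributing to the clean set
    return corrected, clean_mask

# Toy example: 2 classes on a 2x2 image. The (0, 1) pixel is maximally
# uncertain (0.5/0.5), so its pseudo label gets overwritten.
p0 = np.array([[0.99, 0.5], [0.9, 0.01]])
probs = np.stack([p0, 1.0 - p0])
pseudo = np.array([[1, 1], [0, 0]])
corrected, clean = correct_pseudo_labels(probs, pseudo, tau=0.5)
```

In the full framework described above, such corrected labels would enrich the clean training set before the segmentation network is retrained.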
