Structure-Aware Single-Source Generalization with Pixel-Level Disentanglement for Joint Optic Disc and Cup Segmentation

Deploying deep segmentation models in new medical centers poses a significant challenge due to statistical disparities between the source domain and unknown target domains. Recent advances in domain generalization (DG) have improved generalization performance by disentangling domain-specific from domain-invariant features. However, existing DG methods struggle to achieve optimal feature segregation. To address this, we introduce a pixel-level contrastive single-domain generalization (PCSDG) framework and a structure-aware brightness augmentation (SABA) technique for joint optic disc and cup segmentation. First, a disentanglement module captures content- and style-related maps, which are pixel-wise multiplied with the original image to produce saliency-based attention maps, yielding distinct structure and style representations. Second, a contrastive loss in the latent space enhances this segregation. Finally, SABA introduces random brightness variations that preserve anatomical information while diversifying sample styles. Experimental validation on two public fundus image datasets, with two source domains and five target domains, demonstrates the superior performance of PCSDG and SABA across diverse domains compared to state-of-the-art methods. Our code and models are publicly available at: https://github.com/HopkinsKwong/PCSDG
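
The sketch below illustrates the brightness-augmentation idea behind SABA as described in the abstract: randomly perturbing image brightness (style) while leaving the segmentation labels (anatomical structure) untouched. The function name and the gamma/gain parameterization are illustrative assumptions, not the authors' implementation; refer to the linked repository for the actual code.

```python
# Hypothetical sketch of structure-aware brightness augmentation:
# perturb the style (brightness) of a fundus image while keeping the
# optic disc/cup labels unchanged, so anatomical structure is preserved.
import numpy as np

def brightness_augment(image, mask, gamma_range=(0.7, 1.4),
                       gain_range=(0.8, 1.2), rng=None):
    """Apply a random gamma/gain brightness change to one sample.

    image: float array in [0, 1], shape (H, W, C)
    mask:  integer label map, shape (H, W); returned unchanged.
    """
    rng = rng or np.random.default_rng()
    gamma = rng.uniform(*gamma_range)   # non-linear brightness shift
    gain = rng.uniform(*gain_range)     # linear intensity scaling
    augmented = np.clip(gain * np.power(image, gamma), 0.0, 1.0)
    return augmented, mask              # labels are left untouched

# Usage: diversify the style of a single source-domain sample.
img = np.random.rand(256, 256, 3).astype(np.float32)
seg = np.zeros((256, 256), dtype=np.int64)
aug_img, aug_seg = brightness_augment(img, seg)
```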
