Collecting large-scale medical datasets with fine-grained annotations is time-consuming and requires experts.
After training is complete, the discriminator is usually discarded, and only the generator is used for inference.
At inference, the discriminator is discarded, and only the segmentor is used to predict label maps on test images.
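This train-then-discard pattern can be illustrated with a deliberately toy sketch (the `segmentor` and `discriminator` functions below are hypothetical stand-ins, not the actual networks): during adversarial training both components are used, but at test time only the segmentor produces label maps.

```python
import numpy as np

def segmentor(image, threshold=0.5):
    """Toy stand-in for a segmentation network: thresholds
    intensities to produce a binary label map."""
    return (image > threshold).astype(np.uint8)

def discriminator(label_map):
    """Toy stand-in for a discriminator: scores how plausible a
    label map looks. Needed only during adversarial training."""
    return float(label_map.mean())

# --- training phase (sketched): both components participate ---
# ... alternating updates of segmentor and discriminator ...

# --- inference phase: the discriminator is discarded ---
test_image = np.array([[0.1, 0.9],
                       [0.7, 0.2]])
prediction = segmentor(test_image)  # only the segmentor runs on test images
```

The design point is that the discriminator exists solely to shape the segmentor's training signal; once training converges it contributes nothing at test time, so deployment cost is that of the segmentor alone.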
In this paper, we conduct an empirical study to investigate the role of different biases in content-style disentanglement settings and unveil the relationship between the degree of disentanglement and task performance.
We evaluate our model on several medical (ACDC, LVSC, CHAOS) and non-medical (PPSS) datasets, and report performance matching that of models trained with fully annotated segmentation masks.
There has been an increasing focus on learning interpretable feature representations, particularly in applications such as medical image analysis that require explainability, whilst relying less on annotated data (since annotations can be tedious and costly).
Recent research has devoted considerable effort to developing deep learning architectures and optimizers, obtaining impressive results in areas ranging from vision to language processing.
Skull-stripping methods aim to remove non-brain tissue from brain scans acquired with magnetic resonance (MR) imaging.
Clusters of microcalcifications can be an early sign of breast cancer.
In Europe, 20% of CT scans cover the thoracic region.