Analysis of cardiac ultrasound images is commonly performed in routine clinical practice for quantification of cardiac function.
The model is trained in a semi-supervised fashion with novel reconstruction losses that directly aim to improve pathology segmentation when annotations are limited.
Together with the corresponding encoding features, these representations are propagated to the decoding layers via U-Net skip connections.
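As a toy sketch of this fusion step (not the authors' implementation), a U-Net-style skip connection simply concatenates the upsampled decoder features, the matching encoder features, and the extra representation channels along the channel axis; the function name and shapes below are illustrative assumptions.

```python
import numpy as np

def skip_concat(decoder_feat, encoder_feat, extra_rep):
    """Fuse a decoder feature map with its encoder counterpart and an
    additional representation along the channel axis (channels-first:
    C x H x W), as in a U-Net skip connection."""
    # Spatial dimensions must match for channel-wise concatenation.
    assert decoder_feat.shape[1:] == encoder_feat.shape[1:] == extra_rep.shape[1:]
    return np.concatenate([decoder_feat, encoder_feat, extra_rep], axis=0)

dec = np.zeros((64, 32, 32))   # upsampled decoder features
enc = np.zeros((64, 32, 32))   # encoder features at the same resolution
rep = np.zeros((8, 32, 32))    # propagated representation maps
out = skip_concat(dec, enc, rep)
print(out.shape)  # (136, 32, 32)
```

In a real network the concatenated tensor would then pass through further convolutions; here the point is only the channel-wise fusion.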
In this paper, we conduct an empirical study to investigate the role of different biases in content-style disentanglement settings and unveil the relationship between the degree of disentanglement and task performance.
Robust cardiac image segmentation is still an open challenge due to the inability of existing methods to achieve satisfactory performance on unseen data from different domains.
In this paper, we present a model that is encouraged to disentangle pathological information from what appears to be healthy.
Our method synthesises images conditioned on two factors: age (a continuous variable), and status of Alzheimer's Disease (AD, an ordinal variable).
Core to our method is learning a disentangled decomposition into anatomical and imaging factors.
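To make the anatomy/imaging decomposition concrete, here is a toy numpy sketch under stated assumptions: the anatomical factor is a set of roughly one-hot spatial channels, the imaging factor is a non-spatial per-structure appearance vector, and a `tensordot` stands in for the learned decoder that recombines them. The names and the linear recombination are illustrative, not the paper's model.

```python
import numpy as np

def recombine(anatomy, imaging):
    """Recombine a spatial anatomy factor (K x H x W, ~one-hot per pixel)
    with a non-spatial imaging factor (length-K appearance vector) into
    an H x W image; a linear stand-in for a learned decoder."""
    return np.tensordot(imaging, anatomy, axes=([0], [0]))

# Toy anatomy: two structures on a 4x4 grid.
anatomy = np.zeros((2, 4, 4))
anatomy[0, :2] = 1.0            # structure A occupies the top half
anatomy[1, 2:] = 1.0            # structure B occupies the bottom half
imaging = np.array([0.3, 0.9])  # per-structure appearance/intensity

img = recombine(anatomy, imaging)  # top half 0.3, bottom half 0.9
```

Changing `imaging` re-renders the same anatomy with a different appearance, which is the intuition behind separating the two factors.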
There has been increasing focus on learning interpretable feature representations, particularly in applications such as medical image analysis that require explainability, whilst relying less on annotated data (since annotations can be tedious and costly to obtain).
Inter-modality image registration is a critical preprocessing step for many applications within the routine clinical pathway.
We can venture further and consider that a medical image naturally factors into spatial factors that depict anatomy and factors that denote the imaging characteristics.
Pseudo-healthy synthesis, i.e. the creation of a subject-specific `healthy' image from a pathological one, could be helpful in tasks such as anomaly detection, understanding changes induced by pathology and disease, or even as data augmentation.
Specifically, in experiments on ACDC and a dataset from the Edinburgh Imaging Facility QMRI, we achieve performance comparable to fully supervised networks while using only a fraction of the labelled images.