In this work, we investigate the single-source domain generalization problem: training a deep network that is robust to unseen domains when training data is available from only a single source domain, a common constraint in medical imaging applications.
The success of neural networks on medical image segmentation tasks typically relies on large labeled datasets for model training.
In this paper, we present a cooperative framework for training image segmentation models and a latent space augmentation method for generating hard examples.
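The cooperative training scheme itself is the paper's contribution and is not reproduced here; as a generic illustration of latent space augmentation, the minimal PyTorch sketch below perturbs encoder features along the loss gradient to synthesize hard examples (the names `encoder`, `decoder`, and the step size `eps` are illustrative assumptions, not the paper's design).

```python
import torch
import torch.nn.functional as F

def latent_hard_example(encoder, decoder, images, masks, eps=0.1):
    """Sketch: perturb latent features in the direction that increases
    the segmentation loss, yielding a harder training example.
    `eps` is an arbitrary illustrative step size."""
    z = encoder(images)                      # latent features [B, C, H, W]
    z = z.detach().requires_grad_(True)      # treat latents as the variable
    logits = decoder(z)
    loss = F.cross_entropy(logits, masks)
    grad, = torch.autograd.grad(loss, z)
    # Step along the normalized gradient to make the example harder.
    z_hard = z + eps * grad / (grad.norm(dim=1, keepdim=True) + 1e-8)
    return z_hard.detach()
```

The segmentation model would then be trained on both the original latents and the perturbed ones, so the decoder learns to handle the harder cases.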
Few-shot semantic segmentation (FSS) has great potential for medical imaging applications.
In this work, we propose an adversarial data augmentation method for training neural networks for medical image segmentation.
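The specific adversarial scheme is the paper's contribution and is not detailed here; as a stand-in, the sketch below shows one standard form of adversarial data augmentation, an FGSM-style input perturbation (the budget `eps` and the assumption that intensities lie in [0, 1] are illustrative).

```python
import torch
import torch.nn.functional as F

def fgsm_augment(model, images, masks, eps=0.03):
    """Generic FGSM-style adversarial augmentation for segmentation.
    Returns the original batch concatenated with its adversarial copy."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), masks)
    grad, = torch.autograd.grad(loss, images)
    # Perturb inputs along the sign of the loss gradient and keep
    # intensities in [0, 1] (assumed normalization).
    adv = (images + eps * grad.sign()).clamp(0.0, 1.0).detach()
    return torch.cat([images.detach(), adv]), torch.cat([masks, masks])
```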
In addition, paired MS-CMR images enable algorithms to exploit complementary information from the other sequences when segmenting LGE CMR.
We show that the proposed approaches are competitive with the state of the art, both quantitatively and qualitatively.
In this work, we present a fully automatic method to segment cardiac structures from late-gadolinium enhanced (LGE) images without using labelled LGE data for training, but instead by transferring the anatomical knowledge and features learned on annotated balanced steady-state free precession (bSSFP) images, which are easier to acquire.
In this work, we propose a deep learning approach for parallel magnetic resonance imaging (MRI) reconstruction, termed the variable splitting network (VS-Net), for efficient, high-quality reconstruction of undersampled multi-coil MR data.
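Variable splitting methods alternate a learned regularization step with a data-consistency step that keeps the reconstruction faithful to the acquired k-space. The sketch below shows a generic soft data-consistency layer for the single-coil case, simplified from the multi-coil setting VS-Net actually targets; the function and its parameters are illustrative assumptions, not the paper's implementation.

```python
import torch

def data_consistency(x, y, mask, lam=1.0):
    """Generic soft data-consistency step (single-coil simplification).

    x    : current image estimate, complex tensor [B, H, W]
    y    : acquired (undersampled) k-space,       [B, H, W]
    mask : sampling mask (1 = acquired),          [B, H, W]
    lam  : weight on the acquired data; lam -> inf gives hard replacement
    """
    k = torch.fft.fft2(x)
    # At acquired locations, blend the estimate with the measurements;
    # elsewhere, keep the network's prediction.
    k = (1 - mask) * k + mask * (k + lam * y) / (1 + lam)
    return torch.fft.ifft2(k)
```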
Deep unsupervised domain adaptation (UDA) aims to improve the performance of a deep neural network model on a target domain, using solely unlabelled target domain data and labelled source domain data.
We examine the robustness of deep-learning-based MRI reconstruction when tested on unseen contrasts and organs.
In this paper, we propose the PnPAdaNet (plug-and-play adversarial domain adaptation network) for adapting segmentation networks between different modalities of medical images, e.g., MRI and CT. We tackle the significant domain shift by aligning the feature spaces of the source and target domains in an unsupervised manner.
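PnPAdaNet's full architecture is not reproduced here; the sketch below illustrates a common recipe for unsupervised adversarial feature alignment with a domain discriminator, not the paper's exact design (the modules `feat_net` and `disc` are placeholders).

```python
import torch
import torch.nn.functional as F

def alignment_losses(feat_net, disc, x_src, x_tgt):
    """Generic adversarial feature alignment (sketch, not PnPAdaNet itself).

    The discriminator learns to tell source features from target features;
    the feature extractor is trained to fool it, pulling the two
    feature distributions together without target labels."""
    f_src, f_tgt = feat_net(x_src), feat_net(x_tgt)

    # Discriminator: classify source features as 1, target features as 0.
    p_src = disc(f_src.detach())
    p_tgt = disc(f_tgt.detach())
    d_loss = (F.binary_cross_entropy_with_logits(p_src, torch.ones_like(p_src))
              + F.binary_cross_entropy_with_logits(p_tgt, torch.zeros_like(p_tgt)))

    # Feature extractor: make target features look like source features.
    p_adv = disc(f_tgt)
    g_loss = F.binary_cross_entropy_with_logits(p_adv, torch.ones_like(p_adv))
    return d_loss, g_loss
```

In practice the two losses are optimized in alternation: `d_loss` updates the discriminator, `g_loss` updates the feature extractor alongside the supervised segmentation loss on the source domain.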
Domain adaptation is particularly important, yet challenging, in biomedical image analysis, where cross-modality data have markedly different distributions.