MDT-Net: Multi-domain Transfer by Perceptual Supervision for Unpaired Images in OCT Scan

Deep learning models tend to underperform in the presence of domain shifts. Domain transfer has recently emerged as a promising approach in which images exhibiting a domain shift are translated into other domains for augmentation or adaptation. However, in the absence of paired and annotated images, models trained only with adversarial and cycle-consistency losses can fail to preserve anatomical structures during translation. Moreover, the complexity of learning multi-domain transfer can grow significantly with the number of target domains and source images. In this paper, we propose a multi-domain transfer network, named MDT-Net, that addresses these limitations through perceptual supervision. Specifically, our model consists of a single encoder-decoder network and multiple domain-specific transfer modules that disentangle feature representations of the anatomical content and the domain variance. Owing to this architecture, the model greatly reduces the complexity of translation among multiple domains. To demonstrate the performance of our method, we evaluate our model qualitatively and quantitatively on RETOUCH, an OCT dataset comprising scans from three different scanner devices (domains). Furthermore, we use the transfer results as additional training data for fluid segmentation to demonstrate the advantage of our model indirectly, i.e., in data adaptation and augmentation. Experimental results show that our method brings consistent improvements in these segmentation tasks, demonstrating the effectiveness and efficiency of MDT-Net in multi-domain transfer.
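
To make the described architecture concrete, below is a minimal PyTorch sketch (not the authors' code) of the idea: one shared encoder-decoder that carries the anatomical content, plus one lightweight domain-specific transfer module per target domain, trained with a perceptual loss on deep features to encourage anatomical consistency. All class names, layer sizes, the choice of a VGG16 feature extractor, and the exact form of the perceptual loss are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the MDT-Net idea under assumed implementation details:
# a single shared encoder-decoder and N domain-specific transfer modules,
# with perceptual supervision computed on deep features.
import torch
import torch.nn as nn
import torchvision.models as tvm


class SharedEncoderDecoder(nn.Module):
    """Single encoder-decoder carrying the domain-invariant anatomy content."""

    def __init__(self, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def encode(self, x):
        return self.encoder(x)

    def decode(self, z):
        return self.decoder(z)


class DomainTransferModule(nn.Module):
    """Small residual block that injects domain-specific appearance into the shared features."""

    def __init__(self, ch=64):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, z):
        return z + self.block(z)


class MDTNetSketch(nn.Module):
    """Shared backbone plus one transfer module per target domain."""

    def __init__(self, num_domains=3):
        super().__init__()
        self.backbone = SharedEncoderDecoder()
        self.transfer = nn.ModuleList([DomainTransferModule() for _ in range(num_domains)])

    def forward(self, x, target_domain):
        z = self.backbone.encode(x)          # domain-invariant content features
        z = self.transfer[target_domain](z)  # apply domain-specific variance
        return self.backbone.decode(z)


def perceptual_loss(fake, ref, feat_extractor):
    """Perceptual supervision: match deep features of the translated and reference images."""
    return nn.functional.l1_loss(feat_extractor(fake), feat_extractor(ref))


# Hypothetical usage on single-channel OCT B-scans translated to domain 2.
if __name__ == "__main__":
    model = MDTNetSketch(num_domains=3)
    # Feature extractor for the perceptual term (torchvision >= 0.13 API).
    vgg = tvm.vgg16(weights=None).features[:16].eval()
    x = torch.randn(2, 1, 128, 128)                 # batch of source-domain scans
    fake = model(x, target_domain=2)
    # Compare translated output against its source to preserve anatomy
    # (channels repeated to 3 because VGG expects RGB input).
    loss = perceptual_loss(fake.repeat(1, 3, 1, 1), x.repeat(1, 3, 1, 1), vgg)
    print(loss.item())
```

Because every target domain only adds one small transfer module while the encoder-decoder is shared, the parameter count and training complexity grow far more slowly with the number of domains than training one full translator per domain pair would.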
