Generative Adversarial Networks And Domain Adaptation For Training Data Independent Image Registration

18 Oct 2019  ·  Dwarikanath Mahapatra

Medical image registration is an important task in the automated analysis of multi-modal images and of temporal data spanning multiple patient visits. Conventional approaches, although applicable to many image types, are time consuming. Recently, deep learning (DL) based image registration methods have been proposed that outperform traditional methods in both accuracy and speed. However, DL based methods are heavily dependent on training data and do not generalize well when presented with images from different scanners or anatomies. We present a DL based approach that can register medical images of one type despite being trained on images of a different type. This is achieved through unsupervised domain adaptation in the registration process, which allows easier application to new datasets without extensive retraining. To achieve our objective, we train a network that transforms the given input image pair into a latent feature vector using autoencoders. The resulting encoded feature space is used to generate the registered images with the help of generative adversarial networks (GANs). This feature transformation provides greater invariance to the input image type. Experiments on chest X-ray, retinal and brain MR images show that our method, trained on one dataset, gives better registration performance on other datasets, outperforming conventional methods that do not incorporate domain adaptation.
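The abstract describes the pipeline only at a high level: an autoencoder-style encoder maps the input image pair to a latent feature vector, and a GAN generator decodes that vector into the registered image. The sketch below illustrates this idea in PyTorch; all class names, layer sizes, and the adversarial loss wiring are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an encoder -> GAN-generator registration pipeline.
# Assumptions (not from the paper): 2D single-channel 128x128 images,
# latent size 256, DCGAN-style blocks, standard non-saturating GAN loss.
import torch
import torch.nn as nn


class PairEncoder(nn.Module):
    """Encodes a (fixed, moving) image pair into a latent feature vector
    intended to be invariant to the input image type."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),    # 128 -> 64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),   # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),  # 32 -> 16
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, fixed, moving):
        return self.net(torch.cat([fixed, moving], dim=1))


class Generator(nn.Module):
    """GAN generator: decodes the latent code into the registered image."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),  # 16 -> 32
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),   # 32 -> 64
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),                # 64 -> 128
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 16, 16))


class Discriminator(nn.Module):
    """Judges whether a (fixed, candidate) pair looks well aligned."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1),
        )

    def forward(self, fixed, candidate):
        return self.net(torch.cat([fixed, candidate], dim=1))


if __name__ == "__main__":
    enc, gen, disc = PairEncoder(), Generator(), Discriminator()
    fixed = torch.randn(4, 1, 128, 128)   # reference images
    moving = torch.randn(4, 1, 128, 128)  # images to be registered to the reference

    z = enc(fixed, moving)                # latent code for the image pair
    registered = gen(z)                   # generated registered image

    # Illustrative adversarial losses: identical pairs serve as "aligned" examples.
    bce = nn.BCEWithLogitsLoss()
    d_real = disc(fixed, fixed)
    d_fake = disc(fixed, registered.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    loss_g = bce(disc(fixed, registered), torch.ones_like(d_fake))
    print(loss_d.item(), loss_g.item())
```

In practice one would add a reconstruction or similarity term alongside the adversarial loss and train the encoder jointly so that the latent space stays informative while remaining domain-invariant; those details are left out of this sketch.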
