MedSRGAN: medical images super-resolution using generative adversarial networks

Super-resolution (SR) is an emerging application in medical imaging, driven by the need for high-quality images acquired with limited radiation dose, such as low-dose computed tomography (CT) and low-field magnetic resonance imaging (MRI). However, because of the complexity and stringent visual requirements of medical images, SR remains a challenging task in medical imaging. In this study, we developed a deep-learning-based method, Medical Images SR using Generative Adversarial Networks (MedSRGAN), for SR in medical imaging. A novel convolutional neural network, the Residual Whole Map Attention Network (RWMAN), was developed as the generator network of MedSRGAN to extract useful information across different channels and to pay more attention to meaningful regions. In addition, a weighted sum of content loss, adversarial loss, and adversarial feature loss was used as a multi-task loss function during MedSRGAN training. 242 thoracic CT scans and 110 brain MRI scans were collected for training and evaluating MedSRGAN. The results showed that MedSRGAN not only preserves more texture details but also generates more realistic patterns in reconstructed SR images. A mean opinion score (MOS) test on CT slices, scored by five experienced radiologists, demonstrates the efficacy of our method.
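The multi-task generator objective described above can be sketched as a weighted sum of the three loss terms. The function name and the default weights below are illustrative assumptions, not values reported in the paper:

```python
def medsrgan_generator_loss(content_loss, adversarial_loss, adv_feature_loss,
                            w_content=1.0, w_adv=1e-3, w_feat=1e-3):
    """Weighted multi-task loss: content + adversarial + adversarial feature.

    The weights (w_content, w_adv, w_feat) are hypothetical placeholders;
    the actual weighting used in MedSRGAN is specified in the paper itself.
    """
    return (w_content * content_loss
            + w_adv * adversarial_loss
            + w_feat * adv_feature_loss)


# Example: combine per-batch scalar losses into one training objective.
total = medsrgan_generator_loss(0.5, 2.0, 1.0)
```

In a typical GAN training loop, each of the three scalars would come from a separate forward pass (pixel/perceptual comparison for the content loss, discriminator outputs for the adversarial terms), and the combined scalar is what the generator backpropagates through.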
