Overview of image-to-image translation using deep neural networks: denoising, super-resolution, modality-conversion, and reconstruction in medical imaging

21 May 2019  ·  Shizuo Kaji, Satoshi Kida ·

Since the advent of deep convolutional neural networks (DNNs), computer vision has seen extremely rapid progress that has led to huge advances in medical imaging. There are a plethora of surveys on applications of neural networks in medical imaging. This article does not aim to cover all aspects of the field but focuses on a particular topic: image-to-image translation. Although the topic may sound unfamiliar, it turns out that many seemingly unrelated applications can be understood as instances of image-to-image translation. Such applications include (1) noise reduction, (2) super-resolution, (3) image synthesis, and (4) reconstruction. The same underlying principles and algorithms work for these various tasks. Our aim is to introduce some of the key ideas on this topic from a unified viewpoint. We try to be less intimidating while keeping a decent level of rigour by examining the core ideas through metaphors. We pay particular attention to introducing the jargon specific to image processing with DNNs. An intuitive grasp of the key ideas and a knowledge of the technical terms will greatly help the reader understand existing and future applications. Most recent applications of image-to-image translation are based on one of two fundamental architectures, pix2pix and CycleGAN, depending on whether the available training data are paired or unpaired. We provide code that implements these two architectures with various enhancements. Our code is available online under the permissive MIT licence. We also provide a hands-on tutorial for training a denoising model with our code. We hope that this article, together with the code, provides both an overview and the details of the key algorithms, and that it serves as a basis for developing new applications.
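The abstract's distinction between the paired and unpaired settings boils down to how the generator loss is built: pix2pix adds an L1 term against the paired target, while CycleGAN substitutes a cycle-consistency term because no target is available. The following is a minimal sketch of that contrast, not the authors' released code; the toy networks, tensor shapes, and loss weights (100 and 10) are illustrative assumptions only.

```python
# Minimal sketch contrasting the two generator objectives (assumed toy networks).
import torch
import torch.nn as nn

def tiny_translator():
    # Toy stand-in for a translation network G: 1-channel image -> 1-channel image.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )

def tiny_discriminator():
    # Toy PatchGAN-like discriminator: outputs a map of real/fake scores.
    return nn.Sequential(
        nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(16, 1, 4, stride=2, padding=1),
    )

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

# --- Paired data (pix2pix-style): adversarial loss + L1 loss against the paired target ---
G = tiny_translator()
D = tiny_discriminator()
x = torch.randn(4, 1, 64, 64)   # input images (e.g. noisy scans)
y = torch.randn(4, 1, 64, 64)   # paired targets (e.g. clean scans)
fake_y = G(x)
pred = D(fake_y)
loss_G_paired = bce(pred, torch.ones_like(pred)) + 100.0 * l1(fake_y, y)

# --- Unpaired data (CycleGAN-style): adversarial loss + cycle-consistency loss ---
G_AB, G_BA = tiny_translator(), tiny_translator()
D_B = tiny_discriminator()
a = torch.randn(4, 1, 64, 64)   # images from domain A (no paired counterpart in B)
fake_b = G_AB(a)
rec_a = G_BA(fake_b)            # translate back to A and compare with the original
pred_b = D_B(fake_b)
loss_G_unpaired = bce(pred_b, torch.ones_like(pred_b)) + 10.0 * l1(rec_a, a)

print(loss_G_paired.item(), loss_G_unpaired.item())
```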
