The area of image inpainting over relatively large missing regions has recently
advanced substantially through the adaptation of dedicated deep neural networks.
However, current network-based solutions still introduce undesired artifacts and
noise into the repaired regions.
We present an image inpainting method that is
based on the celebrated generative adversarial network (GAN) framework. The
proposed PGGAN method includes a discriminator network that combines a global
GAN (G-GAN) architecture with a patchGAN approach. PGGAN first shares network
layers between G-GAN and patchGAN, then splits paths to produce two adversarial
losses that feed the generator network in order to capture both local
continuity of image texture and pervasive global features in images. The
proposed framework is evaluated extensively, and the results, including
comparisons to recent state-of-the-art methods, demonstrate that it achieves considerable
improvements in both visual and quantitative evaluations.
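
To make the shared-then-split discriminator idea concrete, the following is a minimal sketch assuming a PyTorch-style implementation. The class name, layer widths, kernel sizes, and the example input resolution are illustrative assumptions, not the paper's exact architecture; the point is only that a common trunk feeds both a patch-level head and a global head, yielding two adversarial losses for the generator.

```python
# Hypothetical sketch of a shared-trunk discriminator with patch and global heads.
# Layer sizes and names are assumptions for illustration, not the paper's exact design.
import torch
import torch.nn as nn


class PGGANDiscriminator(nn.Module):
    def __init__(self, in_channels: int = 3, base: int = 64):
        super().__init__()
        # Shared convolutional trunk used by both the patch and global branches.
        self.shared = nn.Sequential(
            nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, base * 4, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # PatchGAN-style head: a map of per-patch real/fake scores,
        # encouraging local continuity of texture.
        self.patch_head = nn.Conv2d(base * 4, 1, 4, stride=1, padding=1)
        # Global head: a single real/fake score for the whole image,
        # encouraging globally consistent structure.
        self.global_head = nn.Sequential(
            nn.Conv2d(base * 4, base * 8, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(base * 8, 1),
        )

    def forward(self, x: torch.Tensor):
        features = self.shared(x)
        return self.patch_head(features), self.global_head(features)


if __name__ == "__main__":
    # Both outputs would feed separate adversarial loss terms for the generator.
    d = PGGANDiscriminator()
    patch_scores, global_score = d(torch.randn(2, 3, 256, 256))
    print(patch_scores.shape, global_score.shape)  # (2, 1, 31, 31) and (2, 1)
```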