Free-Form Image Inpainting with Gated Convolution

We present a generative image inpainting system that completes images with free-form masks and guidance. The system is based on gated convolutions learned from millions of images without additional labelling effort. The proposed gated convolution solves the issue of vanilla convolutions, which treat all input pixels as valid, and generalizes partial convolution by providing a learnable dynamic feature-selection mechanism for each channel at each spatial location across all layers. Moreover, because free-form masks may appear anywhere in an image with any shape, global and local GANs designed for a single rectangular mask are not applicable. We therefore also present a patch-based GAN loss, named SN-PatchGAN, which applies a spectrally normalized discriminator on dense image patches. SN-PatchGAN is simple in formulation and fast and stable in training. Results on automatic image inpainting and a user-guided extension demonstrate that our system generates higher-quality and more flexible results than previous methods. Our system helps users quickly remove distracting objects, modify image layouts, clear watermarks and edit faces. Code, demo and models are available at: https://github.com/JiahuiYu/generative_inpainting

ICCV 2019
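
Below is a minimal PyTorch sketch of the gated convolution idea described in the abstract: a feature branch and a gating branch share the same receptive field, and the sigmoid-activated gate softly selects features per channel and per spatial location. Module and argument names here are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class GatedConv2d(nn.Module):
    """Sketch of a gated convolution layer (hypothetical names/defaults)."""

    def __init__(self, in_channels, out_channels, kernel_size=3,
                 stride=1, dilation=1, activation=None):
        super().__init__()
        padding = dilation * (kernel_size - 1) // 2
        # Feature branch: plays the role of a vanilla convolution.
        self.feature = nn.Conv2d(in_channels, out_channels, kernel_size,
                                 stride=stride, padding=padding, dilation=dilation)
        # Gating branch: learns a soft mask in (0, 1) for every channel at
        # every spatial location, replacing the hand-crafted, rule-based
        # mask update of partial convolution.
        self.gate = nn.Conv2d(in_channels, out_channels, kernel_size,
                              stride=stride, padding=padding, dilation=dilation)
        self.activation = activation if activation is not None else nn.ELU()

    def forward(self, x):
        return self.activation(self.feature(x)) * torch.sigmoid(self.gate(x))


if __name__ == "__main__":
    # Example input: RGB image + binary mask + optional user-sketch channel.
    x = torch.randn(1, 5, 256, 256)
    layer = GatedConv2d(5, 32)
    print(layer(x).shape)  # torch.Size([1, 32, 256, 256])
```

In the same spirit, SN-PatchGAN applies the GAN loss directly at every spatial position of a spectrally normalized convolutional discriminator's output feature map, so no separate local discriminator is needed for arbitrarily shaped holes.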

Results from the Paper


Task             | Dataset     | Model                                | Metric | Value | Global Rank
Image Inpainting | Places2     | DeepFill v2                          | FID    | 9.27  | #7
Image Inpainting | Places2     | DeepFill v2                          | P-IDS  | 4.01  | #7
Image Inpainting | Places2     | DeepFill v2                          | U-IDS  | 21.32 | #7
Image Inpainting | Places2 val | DeepFillv2 (20-30% free-form mask)   | FID    | 13.5  | #3
Image Inpainting | Places2 val | DeepFillv2 (20-30% free-form mask)   | PD     | 63.0  | #3
Image Inpainting | Places2 val | DeepFillv2 (128×128 center mask)     | FID    | 15.3  | #5
Image Inpainting | Places2 val | DeepFillv2 (128×128 center mask)     | PD     | 96.3  | #6
