Resolution-robust Large Mask Inpainting with Fourier Convolutions

Modern image inpainting systems, despite significant progress, often struggle with large missing areas, complex geometric structures, and high-resolution images. We find that one of the main reasons for that is the lack of an effective receptive field in both the inpainting network and the loss function. To alleviate this issue, we propose a new method called large mask inpainting (LaMa). LaMa is based on i) a new inpainting network architecture that uses fast Fourier convolutions (FFCs), which have an image-wide receptive field; ii) a high-receptive-field perceptual loss; and iii) large training masks, which unlock the potential of the first two components. Our inpainting network improves the state of the art across a range of datasets and achieves excellent performance even in challenging scenarios, e.g., completion of periodic structures. Our model generalizes surprisingly well to resolutions higher than those seen at training time, and achieves this at lower parameter and time costs than the competitive baselines. The code is available at \url{https://github.com/saic-mdal/lama}.
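The key architectural idea is that a convolution applied in the Fourier domain mixes information from every spatial location at once, giving each layer an image-wide receptive field. Below is a minimal PyTorch sketch of that idea; the class name `FourierUnit` and the specific layer choices (1x1 convolution, batch norm, ReLU) are illustrative assumptions, not the authors' exact implementation, which is available at the repository linked above.

```python
# Sketch of the Fourier-domain part of a fast Fourier convolution (FFC):
# FFT over the spatial dimensions -> pointwise conv on stacked real/imag
# channels -> inverse FFT. Not the official LaMa code.
import torch
import torch.nn as nn


class FourierUnit(nn.Module):
    """Global branch of an FFC-style block with an image-wide receptive field."""

    def __init__(self, channels: int):
        super().__init__()
        # Real and imaginary parts are stacked along the channel axis, hence 2x channels.
        self.conv = nn.Conv2d(2 * channels, 2 * channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(2 * channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Real 2D FFT over the spatial dims -> complex tensor of shape (b, c, h, w//2 + 1).
        freq = torch.fft.rfft2(x, norm="ortho")
        # Treat real/imaginary parts as channels and mix them with a 1x1 convolution;
        # each frequency bin summarizes the whole image, so the receptive field is global.
        freq = torch.cat([freq.real, freq.imag], dim=1)
        freq = self.act(self.bn(self.conv(freq)))
        real, imag = torch.chunk(freq, 2, dim=1)
        # Back to the spatial domain at the original resolution.
        return torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")


x = torch.randn(1, 64, 256, 256)
print(FourierUnit(64)(x).shape)  # torch.Size([1, 64, 256, 256])
```

Because the 1x1 convolution operates on frequency bins rather than pixels, the same weights transfer naturally to larger inputs, which is one intuition for why the model holds up at resolutions beyond those seen during training.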

| Task | Dataset | Model | Metric | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Image Inpainting | CelebA-HQ | LaMa | FID | 8.15 | #5 |
| Image Inpainting | CelebA-HQ | LaMa | P-IDS | 2.07 | #4 |
| Image Inpainting | CelebA-HQ | LaMa | U-IDS | 7.58 | #4 |
| Seeing Beyond the Visible | KITTI360-EX | LaMa | Average PSNR | 18.98 | #3 |
| Image Inpainting | Places2 | LaMa | FID | 2.97 | #4 |
| Image Inpainting | Places2 | LaMa | P-IDS | 13.09 | #4 |
| Image Inpainting | Places2 | LaMa | U-IDS | 32.29 | #4 |