Image Inpainting

199 papers with code • 13 benchmarks • 18 datasets

Image Inpainting is the task of reconstructing missing regions in an image. It is an important problem in computer vision and an essential functionality in many imaging and graphics applications, e.g. object removal, image restoration, manipulation, re-targeting, compositing, and image-based rendering.

Source: High-Resolution Image Inpainting with Iterative Confidence Feedback and Guided Upsampling


Most implemented papers

Image Inpainting for Irregular Holes Using Partial Convolutions

NVIDIA/partialconv ECCV 2018

Existing deep learning based image inpainting methods use a standard convolutional network over the corrupted image, with convolutional filter responses conditioned on both valid pixels and the substitute values in the masked holes (typically the mean value).
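The fix proposed in the paper is a partial convolution, which masks the invalid pixels out of each filter response and re-normalizes by the fraction of valid pixels under the window. A minimal single-channel NumPy sketch of that idea follows; it is illustrative, not NVIDIA's implementation (which operates on multi-channel feature maps with learned kernels):

```python
import numpy as np

def partial_conv2d(image, mask, kernel):
    """Mask-aware convolution: each output depends only on valid pixels.

    image:  (H, W) array
    mask:   (H, W) binary array, 1 = valid pixel, 0 = hole
    kernel: (k, k) array, k odd
    Returns the filtered image and the updated (grown) validity mask.
    """
    k = kernel.shape[0]
    pad = k // 2
    img_p = np.pad(image * mask, pad)  # holes contribute zeros
    msk_p = np.pad(mask, pad)
    H, W = image.shape
    out = np.zeros((H, W))
    new_mask = np.zeros((H, W))
    window_size = float(k * k)
    for i in range(H):
        for j in range(W):
            patch = img_p[i:i + k, j:j + k]
            mpatch = msk_p[i:i + k, j:j + k]
            valid = mpatch.sum()
            if valid > 0:
                # re-normalize by sum(1)/sum(M), as in the paper
                out[i, j] = (kernel * patch).sum() * (window_size / valid)
                new_mask[i, j] = 1.0  # at least one valid pixel seen
    return out, new_mask
```

With a constant image and an averaging kernel, the re-normalization keeps the output constant even at image borders and next to holes, and the hole shrinks with every layer, which is the mechanism the paper stacks depth-wise.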

Free-Form Image Inpainting with Gated Convolution

JiahuiYu/generative_inpainting ICCV 2019

We present a generative image inpainting system to complete images with free-form mask and guidance.

Generative Image Inpainting with Contextual Attention

JiahuiYu/generative_inpainting CVPR 2018

Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions.

EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning

knazeri/edge-connect 1 Jan 2019

The edge generator hallucinates edges of the missing region (both regular and irregular) of the image, and the image completion network fills in the missing regions using the hallucinated edges as a prior.

Implicit Neural Representations with Periodic Activation Functions

lucidrains/deep-daze NeurIPS 2020

However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal's spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations.

High-Resolution Image Synthesis with Latent Diffusion Models

compvis/latent-diffusion CVPR 2022

By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond.

Deep Image Prior

DmitryUlyanov/deep-image-prior CVPR 2018

In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning.
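In other words, fitting an untrained network to the observed pixels only (a masked reconstruction loss) lets the network's structure fill in the hole. A hedged 1-D toy sketch of that masked-fitting setup, using a tiny hand-written two-layer network in NumPy rather than the paper's convolutional architecture (all sizes and learning rates here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "inpainting": a sine signal with a hole in the middle.
N = 32
t = np.linspace(0, 2 * np.pi, N)
target = np.sin(t)
mask = np.ones(N)
mask[12:20] = 0.0  # these samples are never seen by the loss

# Fixed random input (the "noise" input of Deep Image Prior) and
# trainable weights of a tiny tanh network.
Z = rng.standard_normal((8, N))
W1 = 0.5 * rng.standard_normal((20, 8))
w2 = 0.1 * rng.standard_normal(20)

def forward(W1, w2):
    H = np.tanh(W1 @ Z)   # hidden features, shape (20, N)
    return w2 @ H, H      # reconstruction, shape (N,)

def masked_loss(y):
    r = (y - target) * mask
    return (r ** 2).sum() / mask.sum()

y0, _ = forward(W1, w2)
initial = masked_loss(y0)

# Plain gradient descent on the masked reconstruction loss.
lr = 0.05
for _ in range(2000):
    y, H = forward(W1, w2)
    r = 2.0 * (y - target) * mask / mask.sum()  # dL/dy
    g_w2 = H @ r
    g_H = np.outer(w2, r)
    g_W1 = (g_H * (1.0 - H ** 2)) @ Z.T
    w2 -= lr * g_w2
    W1 -= lr * g_W1

recon, _ = forward(W1, w2)
final = masked_loss(recon)
```

The loss only ever sees the unmasked samples; whatever the network produces inside the hole comes from its architecture acting as the prior, which is the paper's central claim (demonstrated there with convolutional generators on real images).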

Generative Modeling by Estimating Gradients of the Data Distribution

ermongroup/ncsn NeurIPS 2019

We introduce a new generative model where samples are produced via Langevin dynamics using gradients of the data distribution estimated with score matching.
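The sampling step is plain (unadjusted) Langevin dynamics: repeatedly move along the score and add Gaussian noise. A small NumPy sketch, using the analytic score of a 1-D Gaussian in place of the learned score network (the score function, step size, and step count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def score(x, mu=3.0, sigma=1.0):
    # Analytic score (gradient of the log-density) of N(mu, sigma^2).
    # In the paper this is a neural network trained via score matching.
    return (mu - x) / sigma ** 2

def langevin_sample(n_samples=5000, n_steps=500, step=0.05):
    x = rng.standard_normal(n_samples)  # arbitrary initialization
    for _ in range(n_steps):
        noise = rng.standard_normal(n_samples)
        x = x + step * score(x) + np.sqrt(2.0 * step) * noise
    return x

samples = langevin_sample()
```

After enough steps the samples approximately follow the target distribution (here N(3, 1), up to a small discretization bias from the finite step size); the paper additionally anneals the noise level so the learned score is accurate everywhere the chain visits.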

Score-Based Generative Modeling through Stochastic Differential Equations

yang-song/score_sde ICLR 2021

Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.

Semantic Image Inpainting with Deep Generative Models

bamos/dcgan-completion.tensorflow CVPR 2017

In this paper, we propose a novel method for semantic image inpainting, which generates the missing content by conditioning on the available data.