Video Inpainting

43 papers with code • 4 benchmarks • 12 datasets

The goal of Video Inpainting is to fill in missing regions of a given video sequence with content that is both spatially and temporally coherent. Video Inpainting, also known as video completion, has many real-world applications, such as undesired-object removal and video restoration.

Source: Deep Flow-Guided Video Inpainting

Most implemented papers

Beyond the Field-of-View: Enhancing Scene Visibility and Perception with Clip-Recurrent Transformer

masterhow/flowlens 21 Nov 2022

In this paper, we propose the concept of online video inpainting for autonomous vehicles to expand the field of view, thereby enhancing scene visibility, perception, and system safety.

Free-form Video Inpainting with 3D Gated Convolution and Temporal PatchGAN

amjltc295/Free-Form-Video-Inpainting ICCV 2019

Free-form video inpainting is a very challenging task that could be widely used in video editing, for example for text removal.

Deep Video Inpainting

mcahny/Deep-Video-Inpainting CVPR 2019

Video inpainting aims to fill spatio-temporal holes with plausible content in a video.

Deep Flow-Guided Video Inpainting

nbei/Deep-Flow-Guided-Video-Inpainting CVPR 2019

Then the synthesized flow field is used to guide the propagation of pixels to fill up the missing regions in the video.
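The propagation step described above can be sketched as follows: for each missing pixel, follow its optical-flow vector into a neighboring frame and copy the pixel found there. This is a minimal illustrative sketch with nearest-neighbor sampling; the function name and interface are assumptions, not the paper's actual implementation, which also completes the flow field itself and propagates across many frames.

```python
import numpy as np

def propagate_pixels(target, mask, source, flow):
    """Fill masked pixels in `target` by sampling `source` along optical flow.

    target: (H, W, 3) frame with missing regions
    mask:   (H, W) bool array, True where pixels are missing
    source: (H, W, 3) neighboring frame
    flow:   (H, W, 2) flow from target to source, (dx, dy) per pixel
    """
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    # Follow each flow vector to the corresponding source location
    # (rounded to the nearest pixel and clipped to the image bounds).
    sx = np.clip(np.round(xs + flow[ys, xs, 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(ys + flow[ys, xs, 1]).astype(int), 0, h - 1)
    filled = target.copy()
    filled[ys, xs] = source[sy, sx]
    return filled
```

Pixels whose flow trajectories never reach valid content in any frame are typically handed off to a single-image inpainting model as a fallback.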

Learnable Gated Temporal Shift Module for Deep Video Inpainting

amjltc295/Free-Form-Video-Inpainting 2 Jul 2019

Efficiently utilizing temporal information to recover videos in a consistent way is the central challenge of video inpainting.

DVI: Depth Guided Video Inpainting for Autonomous Driving

sibozhang/Depth-Guided-Inpainting ECCV 2020

To obtain clear street views and photo-realistic simulation in autonomous driving, we present an automatic video inpainting algorithm that removes traffic agents from videos and synthesizes the missing regions under the guidance of depth/point-cloud data.

Learning Joint Spatial-Temporal Transformations for Video Inpainting

researchmm/STTN ECCV 2020

In this paper, we propose to learn a joint Spatial-Temporal Transformer Network (STTN) for video inpainting.

Towards An End-to-End Framework for Flow-Guided Video Inpainting


Optical flow, which captures motion information across frames, is exploited in recent video inpainting methods by propagating pixels along its trajectories.

Exploiting Optical Flow Guidance for Transformer-Based Video Inpainting

hitachinsk/fgt 24 Jan 2023

Transformers have been widely used for video processing owing to the multi-head self-attention (MHSA) mechanism.
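The MHSA mechanism these methods rely on can be sketched in a few lines of NumPy: tokens (e.g. flattened spatio-temporal patches) are projected into per-head queries, keys, and values, attention weights are computed by scaled dot product, and the heads are merged back. The function name, shapes, and single-matrix projections are illustrative assumptions, not the FGT implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, wq, wk, wv, wo, num_heads):
    """Minimal multi-head self-attention over a token sequence.

    x:              (T, D) tokens, e.g. flattened spatio-temporal patches
    wq, wk, wv, wo: (D, D) learned projection matrices
    num_heads:      number of attention heads; D must be divisible by it
    """
    t, d = x.shape
    dh = d // num_heads
    # Project, then split the channel dimension into heads: (heads, T, dh).
    q = (x @ wq).reshape(t, num_heads, dh).transpose(1, 0, 2)
    k = (x @ wk).reshape(t, num_heads, dh).transpose(1, 0, 2)
    v = (x @ wv).reshape(t, num_heads, dh).transpose(1, 0, 2)
    # Scaled dot-product attention per head, then merge heads back to (T, D).
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(dh))
    out = (attn @ v).transpose(1, 0, 2).reshape(t, d)
    return out @ wo
```

Because every token can attend to tokens from all frames, a single layer aggregates evidence across the whole clip; flow guidance, as in the paper above, then biases this attention toward motion-consistent correspondences.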

Improving Video Generation for Multi-functional Applications

bernhard2202/improved-video-gan 30 Nov 2017

In this paper, we aim to improve the state-of-the-art video generative adversarial networks (GANs) with a view towards multi-functional applications.