Video Inpainting
42 papers with code • 4 benchmarks • 12 datasets
The goal of Video Inpainting is to fill in missing regions of a given video sequence with content that is both spatially and temporally coherent. Video Inpainting, also known as video completion, has many real-world applications, such as removal of undesired objects and video restoration.
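The temporal-coherence requirement is what distinguishes the task from per-frame image inpainting. As a minimal, non-learned illustration (the function below is a hypothetical baseline, not taken from any paper listed here), one can fill each missing pixel from the temporally nearest frame in which that pixel is visible:

```python
import numpy as np

def temporal_fill(frames, masks):
    """Naive video-inpainting baseline: fill each missing pixel with the
    value from the temporally nearest frame where that pixel is visible.

    frames: (T, H, W) array of grayscale frames.
    masks:  (T, H, W) boolean array, True where the pixel is missing.
    """
    T = len(frames)
    out = frames.copy()
    for t in range(T):
        ys, xs = np.nonzero(masks[t])
        for y, x in zip(ys, xs):
            # Search outward in time for a frame where (y, x) is known.
            for d in range(1, T):
                for s in (t - d, t + d):
                    if 0 <= s < T and not masks[s, y, x]:
                        out[t, y, x] = frames[s, y, x]
                        break
                else:
                    continue  # no donor at distance d; try d + 1
                break
    return out
```

This assumes a static scene; real methods additionally compensate for motion (e.g. via optical flow) before copying content across frames.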
Most implemented papers
A Temporally-Aware Interpolation Network for Video Frame Inpainting
We propose the first deep learning solution to video frame inpainting, a challenging instance of the general video inpainting problem with applications in video editing, manipulation, and forensics.
Fast and Accurate Tensor Completion with Total Variation Regularized Tensor Trains
We propose a new tensor completion method based on tensor trains.
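A tensor train factors a high-dimensional tensor (such as a video volume) into a chain of small 3-way cores, which is what makes low-rank completion tractable. As background for the representation the paper builds on, here is a minimal NumPy sketch of the plain TT-SVD decomposition (not the paper's TV-regularized completion algorithm itself):

```python
import numpy as np

def tt_svd(tensor, ranks):
    """Decompose a d-way tensor into tensor-train (TT) cores using
    sequential truncated SVDs. `ranks[k]` caps the k-th TT rank.
    Returns a list of cores with shapes (r_{k-1}, n_k, r_k)."""
    dims = tensor.shape
    d = len(dims)
    cores = []
    r_prev = 1
    mat = tensor.reshape(dims[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(ranks[k], len(S))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        # Carry the remainder into the next unfolding.
        mat = (S[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out[0, ..., 0]  # drop the boundary ranks of size 1
```

With unrestricted ranks the reconstruction is exact; completion methods instead keep the ranks small and fit the cores to the observed entries only.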
Deep Blind Video Decaptioning by Temporal Aggregation and Recurrence
Blind video decaptioning is a problem of automatically removing text overlays and inpainting the occluded parts in videos without any input masks.
Onion-Peel Networks for Deep Video Completion
Given a set of reference images and a target image with holes, our network fills the holes by referring to the content of the reference images.
Copy-and-Paste Networks for Deep Video Inpainting
We propose a novel DNN-based framework for video inpainting, called Copy-and-Paste Networks, that takes advantage of information available in other frames of the video.
An Internal Learning Approach to Video Inpainting
We propose a novel video inpainting algorithm that simultaneously hallucinates missing appearance and motion (optical flow) information, building upon the recent 'Deep Image Prior' (DIP) that exploits convolutional network architectures to enforce plausible texture in static images.
AutoRemover: Automatic Object Removal for Autonomous Driving Videos
To deal with shadows, we build up an autonomous driving shadow dataset and design a deep neural network to detect shadows automatically.
Flow-edge Guided Video Completion
We present a new flow-based video completion algorithm.
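Flow-based completion methods first complete the optical flow field and then propagate known pixels along flow trajectories into the holes. The sketch below is a heavily simplified single propagation step under stated assumptions: the flow is already completed and given, and nearest-neighbour lookup stands in for the paper's edge-guided flow completion and gradient-domain blending:

```python
import numpy as np

def propagate_with_flow(prev_frame, prev_mask, cur_mask, flow):
    """One forward-propagation step of flow-based video completion.

    Warps known pixels of the previous frame into the current frame's
    holes. `flow[y, x]` holds the (dy, dx) offset pointing from current
    pixel (y, x) into the previous frame. Masks are True where missing.
    Returns the propagated values and a mask of successfully filled pixels.
    """
    H, W = cur_mask.shape
    out = np.zeros((H, W), dtype=float)
    filled = np.zeros((H, W), dtype=bool)
    for y in range(H):
        for x in range(W):
            if not cur_mask[y, x]:
                continue
            # Nearest-neighbour lookup along the flow vector.
            sy = int(round(y + flow[y, x, 0]))
            sx = int(round(x + flow[y, x, 1]))
            if 0 <= sy < H and 0 <= sx < W and not prev_mask[sy, sx]:
                out[y, x] = prev_frame[sy, sx]
                filled[y, x] = True
    return out, filled
```

In a full pipeline this step is run forward and backward through the sequence, and any pixels no flow trajectory can reach are filled by a single-image inpainting model.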
Progressive Temporal Feature Alignment Network for Video Inpainting
To inpaint a video faithfully, it is necessary to find correspondences in neighbouring frames in order to hallucinate the unknown content.
Decoupled Spatial-Temporal Transformer for Video Inpainting
The seamless combination of these two novel designs forms a better spatial-temporal attention scheme, and the proposed model outperforms state-of-the-art video inpainting approaches with significantly boosted efficiency.