Internal Video Inpainting by Implicit Long-range Propagation

ICCV 2021  ·  Hao Ouyang, Tengfei Wang, Qifeng Chen

We propose a novel framework for video inpainting by adopting an internal learning strategy. Unlike previous methods that use optical flow for cross-frame context propagation to inpaint unknown regions, we show that this can be achieved implicitly by fitting a convolutional neural network to known regions. Moreover, to handle challenging sequences with ambiguous backgrounds or long-term occlusion, we design two regularization terms to preserve high-frequency details and long-term temporal consistency. Extensive experiments on the DAVIS dataset demonstrate that the proposed method achieves state-of-the-art inpainting quality quantitatively and qualitatively. We further extend the proposed method to another challenging task: learning to remove an object from a 4K video given a single object mask in only one frame.
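The core idea can be illustrated with a minimal sketch: a small CNN is overfit to a single video, supervised only on known (unmasked) pixels, so that the masked regions are filled by the network's own predictions. The sketch below, in PyTorch, is an illustrative assumption rather than the authors' implementation; the network architecture, loss weights, and the naive consistency term between neighbouring predictions (standing in for the paper's long-term temporal consistency regularization) are all placeholders.

```python
# Minimal internal-learning sketch (assumptions, not the paper's code):
# overfit a tiny CNN to the known pixels of one video and read out its
# predictions inside the holes.
import torch
import torch.nn as nn

class TinyInpaintNet(nn.Module):
    """Small convolutional network used only for illustration."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, frame, mask):
        # Feed the masked frame together with the mask channel.
        x = torch.cat([frame * mask, mask], dim=1)
        return self.net(x)

def train_internal(frames, masks, steps=2000, lr=1e-3, w_temp=0.1):
    """Fit the network to the known (mask == 1) pixels of one video.

    frames: (T, 3, H, W) in [0, 1]; masks: (T, 1, H, W), 1 = known, 0 = hole.
    """
    net = TinyInpaintNet()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    T = frames.shape[0]
    for _ in range(steps):
        t = torch.randint(0, T - 1, (1,)).item()
        f0, m0 = frames[t:t + 1], masks[t:t + 1]
        f1, m1 = frames[t + 1:t + 2], masks[t + 1:t + 2]
        p0, p1 = net(f0, m0), net(f1, m1)
        # Reconstruction loss only on known pixels; holes are never supervised
        # directly, so their content comes from what the CNN learns internally.
        rec = ((p0 - f0).abs() * m0).mean() + ((p1 - f1).abs() * m1).mean()
        # Naive agreement between neighbouring predictions; a crude stand-in
        # for the paper's long-term temporal consistency regularizer.
        temp = (p0 - p1).abs().mean()
        loss = rec + w_temp * temp
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net

if __name__ == "__main__":
    # Dummy 8-frame clip with a square hole, just to show the training call.
    frames = torch.rand(8, 3, 64, 64)
    masks = torch.ones(8, 1, 64, 64)
    masks[:, :, 16:32, 16:32] = 0.0
    net = train_internal(frames, masks, steps=50)
    with torch.no_grad():
        filled = net(frames[:1], masks[:1])  # hole pixels are the CNN's output
    print(filled.shape)
```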


Datasets


Introduced in the Paper:

VideoRemoval4K

Used in the Paper:

DAVIS, DAVIS 2017

