Towards An End-to-End Framework for Flow-Guided Video Inpainting

Optical flow, which captures motion information across frames, is exploited by recent video inpainting methods to propagate pixels along motion trajectories. However, in these methods the hand-crafted flow-based processing steps are applied separately to form the whole inpainting pipeline. As a result, these methods are less efficient and rely heavily on the intermediate results of earlier stages. In this paper, we propose an End-to-End framework for Flow-Guided Video Inpainting (E$^2$FGVI) built from three elaborately designed trainable modules: flow completion, feature propagation, and content hallucination. The three modules correspond to the three stages of previous flow-based methods but can be jointly optimized, leading to a more efficient and effective inpainting process. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods both qualitatively and quantitatively while showing promising efficiency. The code is available at https://github.com/MCG-NKU/E2FGVI.
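The abstract's key architectural point is that the three stages, which prior flow-based methods ran as separate hand-crafted steps, live inside a single trainable model. A minimal Python sketch of that structure is below; the class and method names are illustrative placeholders inferred from the module names in the abstract, not the paper's actual implementation.

```python
# Hypothetical sketch of the E2FGVI three-stage pipeline. Only the module
# names come from the abstract; all internals here are toy placeholders.

class FlowCompletion:
    """Stage 1: estimate and complete optical flow inside masked regions."""
    def __call__(self, frames, masks):
        # A real module would predict completed flow fields between frames.
        return [("flow", i) for i in range(len(frames) - 1)]

class FeaturePropagation:
    """Stage 2: propagate frame features along the completed flow trajectories."""
    def __call__(self, frames, flows):
        return [("feat", f) for f in frames]

class ContentHallucination:
    """Stage 3: hallucinate content for regions that flow cannot fill."""
    def __call__(self, feats):
        return [("inpainted", f) for f in feats]

class E2FGVIPipeline:
    """Holds all three stages in one model so they can be optimized jointly,
    rather than chained as separately tuned stages as in prior methods."""
    def __init__(self):
        self.flow_completion = FlowCompletion()
        self.feature_propagation = FeaturePropagation()
        self.content_hallucination = ContentHallucination()

    def __call__(self, frames, masks):
        flows = self.flow_completion(frames, masks)
        feats = self.feature_propagation(frames, flows)
        return self.content_hallucination(feats)

pipeline = E2FGVIPipeline()
output = pipeline(frames=["f0", "f1", "f2"], masks=["m0", "m1", "m2"])
```

Because every stage is a callable inside one object, a gradient from the final output could flow back through all three stages during training, which is the "jointly optimized" property the abstract emphasizes.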

PDF Abstract (CVPR 2022)
Task              Dataset                  Model    Metric   Value    Global Rank
Video Inpainting  DAVIS                    E2FGVI   PSNR     33.01    #1
Video Inpainting  DAVIS                    E2FGVI   SSIM     0.9721   #1
Video Inpainting  DAVIS                    E2FGVI   VFID     0.116    #1
Video Inpainting  DAVIS                    E2FGVI   Ewarp    0.1315   #1
Video Inpainting  YouTube-VOS 2018 val     E2FGVI   PSNR     33.71    #1
Video Inpainting  YouTube-VOS 2018 val     E2FGVI   SSIM     0.9700   #1
Video Inpainting  YouTube-VOS 2018 val     E2FGVI   VFID     0.046    #1
Video Inpainting  YouTube-VOS 2018 val     E2FGVI   Ewarp    0.0864   #1
