Copy-and-Paste Networks for Deep Video Inpainting

We present a novel deep-learning-based algorithm for video inpainting. Video inpainting is the process of completing corrupted or missing regions in videos. It poses additional challenges compared to image inpainting because of the extra temporal dimension and the need to maintain temporal coherence. We propose a novel DNN-based framework for video inpainting, called the Copy-and-Paste Networks, that takes advantage of additional information available in other frames of the video. The network is trained to copy corresponding contents from reference frames and paste them into the holes of the target frame. Our framework also includes an alignment network that computes affine matrices between frames, enabling the network to borrow information from more distant frames for robustness. Our method produces visually pleasing and temporally coherent results while running faster than the state-of-the-art optimization-based method. In addition, we extend our framework to enhancing over- or under-exposed frames in videos; using this enhancement technique, we were able to significantly improve lane detection accuracy on road videos.
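To make the copy-and-paste mechanism concrete, below is a minimal sketch in PyTorch of how aligned reference frames can be used to fill holes in a target frame. It assumes the affine matrices are already available (in the paper they are produced by the alignment network) and replaces the learned context matching and decoding stages with a hard masked copy; the function names (`warp_affine`, `copy_and_paste`) are hypothetical and not the authors' code.

```python
# Hypothetical sketch of the copy-and-paste idea, not the authors' implementation.
import torch
import torch.nn.functional as F


def warp_affine(frame, theta):
    # Warp a (N, C, H, W) tensor with a per-sample (N, 2, 3) affine matrix.
    grid = F.affine_grid(theta, frame.size(), align_corners=False)
    return F.grid_sample(frame, grid, align_corners=False)


def copy_and_paste(target, hole_mask, refs, ref_hole_masks, thetas):
    # target:         (N, 3, H, W) corrupted target frame.
    # hole_mask:      (N, 1, H, W), 1 inside the hole, 0 elsewhere.
    # refs:           list of (N, 3, H, W) reference frames.
    # ref_hole_masks: list of (N, 1, H, W) hole masks of the references.
    # thetas:         list of (N, 2, 3) affines aligning each reference to the target.
    filled = target.clone()
    remaining = hole_mask.clone()
    for ref, ref_hole, theta in zip(refs, ref_hole_masks, thetas):
        aligned = warp_affine(ref, theta)
        # Warp the *visibility* mask so out-of-frame regions count as invalid
        # (grid_sample zero-padding then correctly yields 0 = not visible).
        visible = warp_affine(1.0 - ref_hole, theta)
        paste = remaining * visible           # copy only where content is still missing
        filled = filled * (1.0 - paste) + aligned * paste
        remaining = remaining * (1.0 - visible)
    return filled, remaining                  # 'remaining': pixels no reference covered


if __name__ == "__main__":
    n, h, w = 1, 64, 64
    clean = torch.rand(n, 3, h, w)
    hole = torch.zeros(n, 1, h, w)
    hole[:, :, 24:40, 24:40] = 1.0
    target = clean * (1.0 - hole)
    refs = [torch.rand(n, 3, h, w) for _ in range(2)]
    ref_holes = [torch.zeros(n, 1, h, w) for _ in range(2)]
    identity = torch.tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]).repeat(n, 1, 1)
    out, left = copy_and_paste(target, hole, refs, ref_holes, [identity, identity])
    print(out.shape, float(left.sum()))
```

In the actual model, the pasted content is further refined by a decoder rather than copied verbatim, which is what allows the network to handle pixels that no reference frame covers.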

PDF | Abstract (ICCV 2019)

Results from the Paper


Task              Dataset            Model   Metric   Value    Global Rank
Video Inpainting  YouTube-VOS 2018   CAP     PSNR     31.58    #6
Video Inpainting  YouTube-VOS 2018   CAP     SSIM     0.9607   #6
Video Inpainting  YouTube-VOS 2018   CAP     VFID     0.071    #9
Video Inpainting  YouTube-VOS 2018   CAP     Ewarp    0.1470   #5

Results from Other Papers


Task              Dataset   Model   Metric   Value    Rank
Video Inpainting  DAVIS     CAP     PSNR     30.28    #6
Video Inpainting  DAVIS     CAP     SSIM     0.9521   #5
Video Inpainting  DAVIS     CAP     VFID     0.182    #7
Video Inpainting  DAVIS     CAP     Ewarp    0.1533   #4

Methods


No methods listed for this paper.