Exploring Motion Ambiguity and Alignment for High-Quality Video Frame Interpolation

19 Mar 2022  ·  Kun Zhou, Wenbo Li, Xiaoguang Han, Jiangbo Lu

For video frame interpolation (VFI), existing deep-learning-based approaches rely strongly on the ground-truth (GT) intermediate frames, ignoring that the motion between two given adjacent frames is inherently non-unique. As a result, these methods tend to produce blurry, averaged solutions. To alleviate this issue, we propose to relax the requirement of reconstructing an intermediate frame as close to the GT as possible. To this end, we develop a texture consistency loss (TCL) based on the assumption that interpolated content should preserve structures similar to its counterparts in the given frames. Predictions satisfying this constraint are encouraged, even when they differ from the pre-defined GT. Without bells and whistles, our plug-and-play TCL improves the performance of existing VFI frameworks. Separately, previous methods usually adopt a cost volume or correlation map to achieve more accurate image/feature warping; however, the O(N^2) computational complexity (N is the pixel count) makes this infeasible for high-resolution cases. In this work, we design a simple, efficient O(N), yet powerful cross-scale pyramid alignment (CSPA) module that fully exploits multi-scale information. Extensive experiments justify the efficiency and effectiveness of the proposed strategy.
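To make the texture-consistency idea concrete, here is a minimal NumPy sketch of a TCL-style objective: each patch of the interpolated frame is supervised by its best-matching patch drawn from the two input frames, rather than by the GT alone. This is an illustrative reconstruction from the abstract only; the function name, patch size, non-overlapping patch grid, and MSE matching metric are all assumptions, not the paper's exact formulation.

```python
import numpy as np

def texture_consistency_loss(pred, frame0, frame1, patch=4):
    """Sketch of a texture-consistency-style loss (assumed formulation).

    For each non-overlapping `patch` x `patch` block of the interpolated
    frame `pred`, find the closest block among all blocks of the two
    input frames, and penalize the distance to that best match. A
    prediction whose textures exist somewhere in the inputs incurs a
    small loss even if it differs from the pre-defined GT frame.
    """
    H, W = pred.shape[:2]
    # Collect candidate patches from both input frames.
    refs = []
    for f in (frame0, frame1):
        for y in range(0, H - patch + 1, patch):
            for x in range(0, W - patch + 1, patch):
                refs.append(f[y:y + patch, x:x + patch])
    refs = np.stack(refs)  # (M, patch, patch, C)

    total, count = 0.0, 0
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            p = pred[y:y + patch, x:x + patch]
            # MSE to every candidate patch; keep the best (smallest) match.
            d = np.mean((refs - p) ** 2, axis=(1, 2, 3))
            total += d.min()
            count += 1
    return total / count
```

If `pred` equals one of the input frames, every patch finds an exact match and the loss is zero, which is the relaxation the abstract describes: plausible textures borrowed from the inputs are not penalized.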


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Video Frame Interpolation | Middlebury | MA-CSPA | PSNR | 38.83 | #1 |
| Video Frame Interpolation | UCF101 | MA-CSPA | PSNR | 35.43 | #1 |
| Video Frame Interpolation | UCF101 | MA-CSPA | SSIM | 0.979 | #1 |
| Video Frame Interpolation | Vimeo90K | MA-CSPA | PSNR | 36.76 | #1 |
| Video Frame Interpolation | Vimeo90K | MA-CSPA | SSIM | 0.9800 | #3 |

Methods


No methods listed for this paper.