MEMC-Net: Motion Estimation and Motion Compensation Driven Neural Network for Video Frame Interpolation and Enhancement

Motion estimation (ME) and motion compensation (MC) have dominated classical video frame interpolation systems over the past decades. Recently, convolutional neural networks have established a new data-driven paradigm for frame interpolation. However, existing learning-based methods typically estimate only one of the ME and MC building blocks, limiting both computational efficiency and interpolation accuracy. In this work, we propose a motion estimation and motion compensation driven neural network for video frame interpolation. A novel adaptive warping layer is proposed to integrate both optical flow and interpolation kernels to synthesize target frame pixels. This layer is fully differentiable, so the flow and kernel estimation networks can be optimized jointly. Our method benefits from the ME and MC model-driven architecture while avoiding conventional hand-crafted design by training on a large amount of video data. Compared to existing methods, our approach is computationally efficient and generates more visually appealing results. Moreover, our MEMC architecture is a general framework that can be seamlessly adapted to several video enhancement tasks, e.g., super-resolution, denoising, and deblocking. Extensive quantitative and qualitative evaluations demonstrate that the proposed method performs favorably against state-of-the-art video frame interpolation and enhancement algorithms on a wide range of datasets.
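The adaptive warping layer described above can be sketched as follows: each output pixel is produced by sampling the input frame at a flow-displaced location and blending the surrounding pixels with a per-pixel interpolation kernel. This is a minimal illustrative sketch in NumPy, using nearest-neighbor displacement and border clamping for simplicity; the paper's layer is a differentiable, bilinear formulation, and all function and variable names here are assumptions, not the authors' API.

```python
import numpy as np

def adaptive_warp(frame, flow, kernels):
    """Illustrative sketch of an adaptive warping layer (not the paper's code).

    frame:   (H, W) grayscale image
    flow:    (H, W, 2) per-pixel displacement (dx, dy)
    kernels: (H, W, k, k) per-pixel interpolation kernel weights
    """
    H, W = frame.shape
    k = kernels.shape[2]
    r = k // 2
    out = np.zeros_like(frame)
    for y in range(H):
        for x in range(W):
            # Flow-displaced sampling centre (rounded here; the real
            # layer uses differentiable bilinear sampling instead).
            cy = int(round(y + flow[y, x, 1]))
            cx = int(round(x + flow[y, x, 0]))
            acc = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    # Clamp sample coordinates at the image border.
                    sy = min(max(cy + dy, 0), H - 1)
                    sx = min(max(cx + dx, 0), W - 1)
                    acc += kernels[y, x, dy + r, dx + r] * frame[sy, sx]
            out[y, x] = acc
    return out
```

With zero flow and a kernel that puts all its weight on the centre tap, the layer reduces to the identity, which is a convenient sanity check; in the joint architecture, both the flow field and the kernels come from learned sub-networks and are trained end to end through this layer.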

Benchmark result:
Task: Video Frame Interpolation
Dataset: Middlebury
Model: MEMC-Net
Metric: Interpolation Error = 5.24 (Global Rank: #6)

