The proposed model then warps the input frames, depth maps, and contextual features based on the optical flow and local interpolation kernels to synthesize the output frame.
Finally, the two input images are warped and linearly fused to form each intermediate frame.
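The flow-based pipeline described above — warp each input frame toward the target time, then blend — can be sketched as follows. This is a minimal illustration, not any specific paper's implementation: it uses nearest-neighbor sampling for brevity (real systems use differentiable bilinear sampling), and the function names and the assumption that the flows point from the intermediate frame back to each input are mine.

```python
import numpy as np

def backward_warp(frame, flow):
    """Sample `frame` at positions displaced by `flow`.
    Nearest-neighbor sampling for brevity; production code
    would use bilinear interpolation."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return frame[src_y, src_x]

def interpolate_midframe(f0, f1, flow_t0, flow_t1, t=0.5):
    """Warp both inputs toward time t, then fuse them linearly,
    weighting each warped frame by its temporal distance to t."""
    w0 = backward_warp(f0, flow_t0)
    w1 = backward_warp(f1, flow_t1)
    return (1.0 - t) * w0 + t * w1
```

With zero flow this reduces to a plain linear cross-fade of the two frames; the estimated flow is what aligns moving content before the blend.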
Our method develops a deep fully convolutional neural network that takes two input frames and estimates pairs of 1D kernels for all pixels simultaneously.
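The appeal of estimating pairs of 1D kernels is efficiency: a separable K×K kernel needs only 2K values per pixel instead of K², so the network can predict kernels for every pixel at once. A toy sketch of how such per-pixel separable kernels would be applied (my own naive loop-based illustration, not the paper's implementation, which runs this as a batched GPU operation):

```python
import numpy as np

def apply_separable_kernels(frame, kv, kh):
    """Synthesize each output pixel by filtering a local patch with the
    outer product of a per-pixel vertical kernel (kv) and horizontal
    kernel (kh).
    frame: (H, W) grayscale image
    kv, kh: (H, W, K) arrays of estimated 1D kernels per pixel."""
    h, w = frame.shape
    k = kv.shape[-1]
    pad = k // 2
    padded = np.pad(frame, pad, mode='edge')
    out = np.empty_like(frame, dtype=float)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + k, x:x + k]
            # kv @ patch @ kh equals filtering with outer(kv, kh),
            # the full 2D kernel, without ever materializing it.
            out[y, x] = kv[y, x] @ patch @ kh[y, x]
    return out
```

If both 1D kernels are a delta centered on the patch, the output reproduces the input frame exactly; non-trivial learned kernels jointly encode motion and resampling.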
#4 best model for Video Frame Interpolation on Middlebury
Many video enhancement algorithms rely on optical flow to register frames in a video sequence.
#5 best model for Video Frame Interpolation on Vimeo90k
Rather than synthesizing the missing low-resolution (LR) video frames directly, as VFI networks do, we first temporally interpolate the features of the missing LR frames with the proposed feature temporal interpolation network, which captures local temporal contexts.
In addition to the cycle consistency loss, we propose two extensions: motion linearity loss and edge-guided training.
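The cycle consistency idea is that interpolating twice should land back on the same frame: synthesize a midpoint, synthesize the quarter-point frames around it, then interpolate between those and compare against the first midpoint. A minimal sketch under my own simplifying assumptions (the `interp` callable stands in for any midpoint-synthesis network; the specific losses named above, motion linearity and edge guidance, are not reproduced here):

```python
import numpy as np

def cycle_consistency_loss(i0, i2, interp):
    """interp(a, b) -> synthesized midpoint frame between a and b.
    Going i0 -> i0.5 -> i1 and i1 -> i1.5 -> back to i1 via a second
    interpolation pass should reproduce the first-pass midpoint."""
    i1_hat = interp(i0, i2)        # first pass: midpoint of the inputs
    q1 = interp(i0, i1_hat)        # quarter-point frames
    q3 = interp(i1_hat, i2)
    i1_cycle = interp(q1, q3)      # second pass lands back at the midpoint
    return float(np.abs(i1_cycle - i1_hat).mean())  # L1 penalty
```

For a perfectly linear interpolator (e.g. plain frame averaging) the loss is exactly zero; during training the penalty pushes the network toward temporally self-consistent outputs.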
As deep neural networks grow in popularity, increasing attention is being devoted to computer vision problems that were previously solved with more traditional approaches.
In this work, we propose a motion estimation and motion compensation driven neural network for video frame interpolation.
#3 best model for Video Frame Interpolation on Middlebury
Recently, a number of data-driven frame interpolation methods based on convolutional neural networks have been proposed.
#3 best model for Video Frame Interpolation on Vimeo90k
Video frame interpolation is one of the most challenging tasks in video processing research.