Finally, the two input images are warped and linearly fused to form each intermediate frame.
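The fusion step above can be sketched in a few lines: each intermediate frame is a time-weighted linear blend of the two already-warped inputs. This is a minimal illustration in NumPy; `linear_fuse` and the variable names are my own, not from any of the listed papers.

```python
import numpy as np

def linear_fuse(warped0, warped1, t):
    """Blend two warped frames with weights (1 - t) and t.

    warped0, warped1: float arrays (H x W or H x W x C), assumed to be
    already warped toward the intermediate time t in [0, 1].
    """
    return (1.0 - t) * warped0 + t * warped1

# Midway (t = 0.5) between a black and a white frame is mid-gray.
a = np.zeros((2, 2))
b = np.ones((2, 2))
mid = linear_fuse(a, b, 0.5)
```

In practice the blend weights are usually also modulated by per-pixel visibility or occlusion masks, which this sketch omits.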
We develop a deep fully convolutional neural network that takes two input frames and estimates pairs of 1D kernels for all pixels simultaneously.
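To make the "pairs of 1D kernels" idea concrete: for each output pixel, the outer product of a vertical and a horizontal 1D kernel acts as an implicit 2D kernel applied to a patch of the input. The sketch below is a naive NumPy illustration of that synthesis step for a single frame; the function name and shapes are assumptions, and a real model would predict `kv`/`kh` with a CNN and run this as a batched GPU op.

```python
import numpy as np

def apply_separable_kernels(frame, kv, kh):
    """Apply per-pixel pairs of 1D kernels to one padded input frame.

    frame: (H + K - 1, W + K - 1) padded grayscale image.
    kv, kh: (H, W, K) vertical and horizontal 1D kernels; their outer
    product forms an implicit K x K 2D kernel per output pixel.
    """
    H, W, K = kv.shape
    out = np.empty((H, W))
    for y in range(H):
        for x in range(W):
            patch = frame[y:y + K, x:x + K]
            # kv^T * patch * kh: the separable 2D convolution at (y, x).
            out[y, x] = kv[y, x] @ patch @ kh[y, x]
    return out

# Uniform averaging kernels reproduce a constant image exactly.
pad = np.ones((4, 4))
kv = kh = np.full((2, 2, 3), 1.0 / 3.0)
result = apply_separable_kernels(pad, kv, kh)
```

Storing two K-vectors instead of one K x K kernel per pixel is what makes large kernels affordable: memory and parameters grow as 2K rather than K².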
#3 best model for Video Frame Interpolation on Middlebury
Many video enhancement algorithms rely on optical flow to register frames in a video sequence.
#4 best model for Video Frame Interpolation on Vimeo90k
The proposed model then warps the input frames, depth maps, and contextual features based on the optical flow and local interpolation kernels for synthesizing the output frame.
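The warping referred to above is typically backward warping: each target pixel looks up its source location through the optical flow. Here is a deliberately simple nearest-neighbor NumPy sketch of that lookup; real systems use differentiable bilinear sampling, and `backward_warp` plus its flow convention are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def backward_warp(src, flow):
    """Warp src toward the target view via nearest-neighbor lookup.

    src: (H, W) image; flow: (H, W, 2) per-pixel (dy, dx) offsets that
    point from each target-grid pixel back into src. Out-of-bounds
    lookups are filled with zero.
    """
    H, W = src.shape
    out = np.zeros_like(src)
    for y in range(H):
        for x in range(W):
            sy = int(round(y + flow[y, x, 0]))
            sx = int(round(x + flow[y, x, 1]))
            if 0 <= sy < H and 0 <= sx < W:
                out[y, x] = src[sy, sx]
    return out

# A uniform flow of one pixel to the right shifts the image left.
ramp = np.arange(9, dtype=float).reshape(3, 3)
shift = np.zeros((3, 3, 2))
shift[..., 1] = 1.0
warped = backward_warp(ramp, shift)
```

The same sampling operator can warp not just the frames but also depth maps and contextual feature maps, since it only needs the flow field and an array to sample from.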
In addition to the cycle consistency loss, we propose two extensions: motion linearity loss and edge-guided training.
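One way to state a cycle consistency loss for interpolation: synthesize intermediate frames from the inputs, then interpolate between those synthesized frames; the result should reconstruct a real input frame. The sketch below shows that idea with a pluggable `interp` function; the exact formulation, names, and frame indexing are my assumptions for illustration, not the paper's precise losses.

```python
import numpy as np

def cycle_consistency_loss(f1, f2, f3, interp):
    """Cycle consistency over a frame triplet (conceptual sketch).

    interp(a, b) synthesizes the frame midway between a and b. We
    synthesize the two half-step frames around f2, interpolate between
    them, and penalize the L1 gap to the real middle frame f2.
    """
    mid_a = interp(f1, f2)        # estimate at t = 1.5
    mid_b = interp(f2, f3)        # estimate at t = 2.5
    cycle = interp(mid_a, mid_b)  # should reconstruct f2 (t = 2)
    return np.abs(cycle - f2).mean()

# With an exact linear interpolator the cycle closes and the loss is 0.
linear = lambda a, b: 0.5 * (a + b)
fa = np.zeros((2, 2))
fb = np.full((2, 2), 0.5)
fc = np.ones((2, 2))
loss = cycle_consistency_loss(fa, fb, fc, linear)
```

The appeal of this objective is that it needs no ground-truth intermediate frames: supervision comes from the input frames themselves.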
As deep neural networks grow in popularity, increasing attention is being devoted to computer vision problems that were previously solved with more traditional approaches.
In this work, we propose a motion estimation and motion compensation driven neural network for video frame interpolation.
#2 best model for Video Frame Interpolation on Middlebury
Recently, a number of data-driven frame interpolation methods based on convolutional neural networks have been proposed.
#2 best model for Video Frame Interpolation on Vimeo90k
Temporal Video Frame Synthesis (TVFS) aims at synthesizing novel frames at timestamps different from existing frames, and has wide applications in video coding, editing, and analysis.