Video Frame Interpolation
95 papers with code • 20 benchmarks • 11 datasets
The goal of Video Frame Interpolation is to synthesize one or more intermediate frames between two adjacent frames of the original video. Video Frame Interpolation can be applied to generate slow-motion video, increase the video frame rate, and recover lost frames in video streaming.
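As a minimal illustration of the task, the simplest possible interpolator blends the two neighboring frames linearly in time (the function name and toy frames below are illustrative, not from any paper on this page). Real VFI models instead warp pixels along estimated motion, which avoids the ghosting this naive average produces on moving objects.

```python
import numpy as np

def interpolate_midframe(frame_a, frame_b, t=0.5):
    """Naive VFI baseline: blend two frames linearly at time t in [0, 1]."""
    a = frame_a.astype(np.float64)
    b = frame_b.astype(np.float64)
    return ((1.0 - t) * a + t * b).astype(frame_a.dtype)

# Two tiny 2x2 grayscale "frames"
f0 = np.zeros((2, 2), dtype=np.uint8)
f1 = np.full((2, 2), 100, dtype=np.uint8)
mid = interpolate_midframe(f0, f1)  # every pixel blends to 50
```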
Libraries
Use these libraries to find Video Frame Interpolation models and implementations.
Most implemented papers
Enhanced Quadratic Video Interpolation
In this work, we further improve the performance of QVI from three facets and propose an enhanced quadratic video interpolation (EQVI) model.
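The idea behind quadratic interpolation is to model per-pixel motion with constant acceleration rather than constant velocity, using flows to both the previous and the next frame. A scalar sketch of that motion model (my reconstruction of the quadratic flow formula; the function name is illustrative):

```python
def quadratic_flow(f_0_to_1, f_0_to_m1, t):
    """Displacement from frame 0 to time t under a quadratic motion model.

    f_0_to_1:  flow from frame 0 to frame +1
    f_0_to_m1: flow from frame 0 to frame -1
    With x(t) = v*t + (a/2)*t^2, central differences give
    v = (f_0_to_1 - f_0_to_m1)/2 and a/2 = (f_0_to_1 + f_0_to_m1)/2.
    With zero acceleration this reduces to linear interpolation t * f_0_to_1.
    """
    accel_term = (f_0_to_1 + f_0_to_m1) / 2.0  # a/2
    vel_term = (f_0_to_1 - f_0_to_m1) / 2.0    # v
    return accel_term * t ** 2 + vel_term * t
```

For an object moving at constant velocity (-2 px to frame -1, +2 px to frame +1), the halfway displacement matches linear interpolation; when the flows are asymmetric, the quadratic term bends the trajectory.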
FILM: Frame Interpolation for Large Motion
Recent methods use multiple networks to estimate optical flow or depth and a separate network dedicated to frame synthesis.
IFRNet: Intermediate Feature Refine Network for Efficient Frame Interpolation
Prevailing video frame interpolation algorithms, which generate intermediate frames from consecutive inputs, typically rely on complex model architectures with heavy parameter counts or large inference delay, hindering their use in diverse real-time applications.
BVI-VFI: A Video Quality Database for Video Frame Interpolation
In order to narrow this research gap, we have developed a new video quality database named BVI-VFI, which contains 540 distorted sequences generated by applying five commonly used VFI algorithms to 36 diverse source videos with various spatial resolutions and frame rates.
MOSO: Decomposing MOtion, Scene and Object for Video Prediction
Experimental results demonstrate that our method achieves new state-of-the-art performance on five challenging benchmarks for video prediction and unconditional video generation: BAIR, RoboNet, KTH, KITTI and UCF101.
LDMVFI: Video Frame Interpolation with Latent Diffusion Models
Existing works on video frame interpolation (VFI) mostly employ deep neural networks that are trained by minimizing the L1, L2, or deep feature space distance (e.g. VGG loss) between their outputs and ground-truth frames.
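For concreteness, the L1 and L2 distances mentioned here are just pixel-wise mean absolute and mean squared error between the predicted and ground-truth frames (a deep feature distance such as the VGG loss additionally requires a pretrained network and is omitted from this sketch):

```python
import numpy as np

def l1_loss(pred, gt):
    """Mean absolute pixel error between prediction and ground truth."""
    return np.abs(pred - gt).mean()

def l2_loss(pred, gt):
    """Mean squared pixel error between prediction and ground truth."""
    return ((pred - gt) ** 2).mean()
```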
Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time
Moreover, on the seemingly implausible ×16 interpolation task, our method outperforms existing methods by more than 1.5 dB in terms of PSNR.
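PSNR, the metric quoted in dB above, is a standard reconstruction measure derived from mean squared error; a dB gain therefore corresponds to a multiplicative drop in MSE. A minimal implementation for 8-bit frames:

```python
import numpy as np

def psnr(pred, gt, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(max_val^2 / MSE)."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)
```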
Video Frame Interpolation via Adaptive Convolution
Video frame interpolation typically involves two steps: motion estimation and pixel synthesis.
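These two steps can be made concrete with a toy example that assumes the motion-estimation step is already done and the motion is a single, integer, horizontal shift for the whole frame (a deliberate simplification; real methods estimate dense per-pixel optical flow and use sub-pixel warping):

```python
import numpy as np

def synthesize_midframe(frame0, frame1, flow_0_to_1):
    """Toy pixel-synthesis step: warp both frames halfway and average.

    flow_0_to_1 is the (assumed known) horizontal shift, in pixels,
    of the scene from frame 0 to frame 1.
    """
    half = flow_0_to_1 // 2
    warped0 = np.roll(frame0, half, axis=1)   # frame 0 shifted forward in time
    warped1 = np.roll(frame1, -half, axis=1)  # frame 1 shifted backward in time
    blend = (warped0.astype(np.float64) + warped1.astype(np.float64)) / 2.0
    return blend.astype(frame0.dtype)

# A bright pixel moves from column 0 to column 2 between the frames,
# so the synthesized middle frame places it at column 1.
f0 = np.zeros((1, 4), dtype=np.uint8); f0[0, 0] = 100
f1 = np.zeros((1, 4), dtype=np.uint8); f1[0, 2] = 100
mid = synthesize_midframe(f0, f1, flow_0_to_1=2)
```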
Using phase instead of optical flow for action recognition
We design these complex filters to resemble complex Gabor filters, typically employed for phase-information extraction.
MEMC-Net: Motion Estimation and Motion Compensation Driven Neural Network for Video Interpolation and Enhancement
Recently, a number of data-driven frame interpolation methods based on convolutional neural networks have been proposed.