In our work, we instead propose an adversarial training scheme for video super-resolution that leads to temporally coherent solutions without sacrificing spatial detail.
In this paper, we show that proper frame alignment and motion compensation are crucial for achieving high-quality results.
We propose a novel end-to-end deep neural network that generates dynamic upsampling filters and a residual image, both computed from the local spatio-temporal neighborhood of each pixel, thereby avoiding explicit motion compensation.
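To make the dynamic-upsampling idea concrete, here is a minimal numpy sketch of how per-pixel predicted filters could produce an upscaled frame. The function name, the filter tensor layout `(H, W, r*r, k, k)`, and the loop structure are illustrative assumptions; in the actual method a network predicts (softmax-normalized) filters from the spatio-temporal neighborhood and a residual image is added afterward.

```python
import numpy as np

def dynamic_upsample(lr, filters, r=2, k=3):
    """Apply per-pixel dynamic upsampling filters (illustrative sketch).

    lr      : (H, W) low-resolution frame
    filters : (H, W, r*r, k, k) one k x k filter per output sub-pixel
              position; assumed here, normally predicted by a network
    returns : (H*r, W*r) upscaled frame
    """
    H, W = lr.shape
    pad = k // 2
    lr_p = np.pad(lr, pad, mode="edge")
    hr = np.zeros((H * r, W * r))
    for y in range(H):
        for x in range(W):
            patch = lr_p[y:y + k, x:x + k]      # local LR neighborhood
            for s in range(r * r):
                dy, dx = divmod(s, r)           # sub-pixel offset in HR grid
                hr[y * r + dy, x * r + dx] = np.sum(filters[y, x, s] * patch)
    return hr
```

With identity filters (a 1 at the kernel center), each LR pixel is simply replicated into an r x r block, which is a quick sanity check of the indexing.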
Recent advances in video super-resolution have shown that convolutional neural networks combined with motion compensation are able to merge information from multiple low-resolution (LR) frames to generate high-quality images.
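The merging step described above relies on aligning neighboring LR frames to the reference frame before fusion. A minimal sketch of that alignment, assuming a given per-pixel flow field and using nearest-neighbor backward warping for brevity (real pipelines use bilinear sampling and a learned fusion network rather than a plain average):

```python
import numpy as np

def warp_to_reference(frame, flow):
    """Backward-warp a neighboring frame toward the reference frame.

    frame : (H, W) neighboring LR frame
    flow  : (H, W, 2) per-pixel displacement (dy, dx) from reference
            pixel to its location in the neighboring frame
    """
    H, W = frame.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.rint(ys + flow[..., 0]), 0, H - 1).astype(int)
    src_x = np.clip(np.rint(xs + flow[..., 1]), 0, W - 1).astype(int)
    return frame[src_y, src_x]

def merge_frames(reference, neighbors, flows):
    """Average the reference with motion-compensated neighbors; a CNN
    would instead fuse these aligned frames to recover detail."""
    warped = [warp_to_reference(f, fl) for f, fl in zip(neighbors, flows)]
    return np.mean([reference] + warped, axis=0)
```

A frame shifted horizontally by one pixel is recovered (away from the border) by warping with a constant flow of minus one pixel, which exercises the indexing above.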
Extensive experiments demonstrate that HR optical flows provide more accurate correspondences than their LR counterparts and improve both accuracy and temporal consistency.
Subsequently, a quality enhancement subnet fuses the non-PQF with the compensated PQFs (peak quality frames) to reduce the compression artifacts of the non-PQF.
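The fusion step can be caricatured as a residual correction of the low-quality frame using its motion-compensated high-quality neighbors. The function below is a toy stand-in under stated assumptions: the plain averaging and the fixed blending weight `alpha` replace what the actual quality-enhancement subnet learns end to end.

```python
import numpy as np

def enhance_non_pqf(non_pqf, compensated_pqfs, alpha=0.5):
    """Toy stand-in for the quality-enhancement subnet: fuse the
    compressed non-PQF with motion-compensated PQFs and apply the
    fused estimate as a residual correction.

    non_pqf          : (H, W) low-quality frame
    compensated_pqfs : list of (H, W) PQFs already warped to non_pqf
    alpha            : illustrative blending weight (learned in practice)
    """
    fused = np.mean(compensated_pqfs, axis=0)       # naive fusion
    return non_pqf + alpha * (fused - non_pqf)      # residual correction
```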
In this paper we present a novel method for the correction of motion artifacts that are present in fetal Magnetic Resonance Imaging (MRI) scans of the whole uterus.
Recently, a number of data-driven frame interpolation methods based on convolutional neural networks have been proposed.