Video Super-Resolution
132 papers with code • 15 benchmarks • 13 datasets
Video Super-Resolution is a computer vision task that aims to reconstruct high-resolution video frames from low-resolution input, improving the visual quality of the video sequence.
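As a minimal illustration of the task's input/output shapes (not any particular method from the papers below), the sketch here upscales a low-resolution clip frame by frame with nearest-neighbour interpolation, assuming frames are stored as a `T x H x W x C` NumPy array; learned VSR models replace this interpolation with a neural network that also aggregates information across frames.

```python
import numpy as np

def upscale_frame(frame: np.ndarray, scale: int) -> np.ndarray:
    """Nearest-neighbour upscaling of a single H x W x C frame."""
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

def upscale_video(frames: np.ndarray, scale: int = 4) -> np.ndarray:
    """Upscale a T x H x W x C low-resolution clip frame by frame."""
    return np.stack([upscale_frame(f, scale) for f in frames])

lr = np.random.rand(5, 32, 32, 3)   # 5 low-resolution 32x32 RGB frames
hr = upscale_video(lr, scale=4)
print(hr.shape)  # (5, 128, 128, 3)
```

A x4 scale factor is used only because it is a common benchmark setting; the function works for any integer scale.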
( Image credit: Detail-revealing Deep Video Super-Resolution )
Libraries
Use these libraries to find Video Super-Resolution models and implementations
Datasets
Latest papers
Collaborative Feedback Discriminative Propagation for Video Super-Resolution
However, inaccurate alignment usually leads to aligned features with significant artifacts, which will be accumulated during propagation and thus affect video restoration.
Deep Blind Super-Resolution for Satellite Video
Therefore, this paper proposes a practical Blind SVSR algorithm (BSVSR) to explore more sharp cues by considering the pixel-wise blur levels in a coarse-to-fine manner.
Video Super-Resolution Transformer with Masked Inter&Intra-Frame Attention
The core of MIA-VSR is leveraging feature-level temporal continuity between adjacent frames to reduce redundant computations and make more rational use of previously enhanced SR features.
TMP: Temporal Motion Propagation for Online Video Super-Resolution
Online video super-resolution (online-VSR) highly relies on an effective alignment module to aggregate temporal information, while the strict latency requirement makes accurate and efficient alignment very challenging.
Semantic Lens: Instance-Centric Semantic Alignment for Video Super-Resolution
As a critical clue of video super-resolution (VSR), inter-frame alignment significantly impacts overall performance.
Motion-Guided Latent Diffusion for Temporally Consistent Real-world Video Super-resolution
To ensure the content consistency among adjacent frames, we exploit the temporal dynamics in LR videos to guide the diffusion process by optimizing the latent sampling path with a motion-guided loss, ensuring that the generated HR video maintains a coherent and continuous visual flow.
Enhancing Perceptual Quality in Video Super-Resolution through Temporally-Consistent Detail Synthesis using Diffusion Models
We demonstrate the effectiveness of StableVSR in enhancing the perceptual quality of upscaled videos compared to existing state-of-the-art methods for VSR.
VCISR: Blind Single Image Super-Resolution with Video Compression Synthetic Data
In this work, we present, for the first time, a video compression-based degradation model to synthesize low-resolution image data for the blind SISR task.
Scale-Adaptive Feature Aggregation for Efficient Space-Time Video Super-Resolution
The Space-Time Video Super-Resolution (STVSR) task aims to enhance the visual quality of videos, by simultaneously performing video frame interpolation (VFI) and video super-resolution (VSR).
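To make the two components of STVSR concrete, the hedged sketch below chains a crude stand-in for each: frame interpolation by linearly blending adjacent frames, followed by nearest-neighbour spatial upscaling. Both stand-ins are illustrative assumptions, not the method of the paper above, which learns these operations jointly.

```python
import numpy as np

def blend_midframes(frames: np.ndarray) -> np.ndarray:
    """Double the frame rate by inserting the average of each adjacent pair
    (a naive stand-in for learned video frame interpolation, VFI)."""
    out = [frames[0]]
    for prev, nxt in zip(frames[:-1], frames[1:]):
        out.append(0.5 * (prev + nxt))  # linear blend of neighbours
        out.append(nxt)
    return np.stack(out)

def upscale(frames: np.ndarray, scale: int) -> np.ndarray:
    """Nearest-neighbour spatial upscaling (a naive stand-in for VSR)."""
    return frames.repeat(scale, axis=1).repeat(scale, axis=2)

lr = np.random.rand(4, 16, 16, 3)        # 4 low-resolution frames
stvsr = upscale(blend_midframes(lr), 4)  # 7 frames at 64x64
print(stvsr.shape)  # (7, 64, 64, 3)
```

Interpolating T frames yields 2T - 1 frames, so the 4-frame clip becomes 7 frames before upscaling.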
spateGAN: Spatio-Temporal Downscaling of Rainfall Fields Using a cGAN Approach
Our experiments indicate that the ensembles of generated temporally consistent rainfall fields are in high agreement with the observational data.