Enhancing Video Super-Resolution via Implicit Resampling-based Alignment

arXiv 2024 · Kai Xu, Ziwei Yu, Xin Wang, Michael Bi Mi, Angela Yao

In video super-resolution, it is common to use frame-wise alignment to support the propagation of information over time. The role of alignment is well-studied for low-level video enhancement, but existing works overlook a critical step -- resampling. We show through extensive experiments that for alignment to be effective, the resampling should preserve the reference frequency spectrum while minimizing spatial distortions. However, most existing works simply default to bilinear interpolation for resampling, even though its smoothing effect hinders super-resolution. From these observations, we propose an implicit resampling-based alignment. The sampling positions are encoded by a sinusoidal positional encoding, and the value is estimated with a coordinate network and a window-based cross-attention. We show that bilinear interpolation inherently attenuates high-frequency information, while an MLP-based coordinate network can approximate more frequencies. Experiments on synthetic and real-world datasets show that alignment with our proposed implicit resampling enhances the performance of state-of-the-art frameworks with minimal impact on both compute and parameters.
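The core idea -- replacing bilinear weights with attention between a sinusoidally encoded fractional offset (query) and encoded relative positions of neighboring integer pixels (keys) -- can be sketched as below. This is a minimal single-position, single-head NumPy illustration, not the paper's implementation: the function names (`sinusoidal_encoding`, `implicit_resample`), the linear projections `W_q`/`W_k` standing in for the learned coordinate network, and the 3x3 window size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sinusoidal_encoding(offsets, num_freqs=4):
    """Encode 2-D fractional offsets with sin/cos at doubling frequencies."""
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi           # (F,)
    ang = offsets[..., None] * freqs                        # (2, F)
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=-1).reshape(-1)

def implicit_resample(feat, pos, W_q, W_k, window=1):
    """Resample feature map `feat` (H, W, C) at one fractional position `pos`.

    Instead of fixed bilinear weights, softmax attention between the encoded
    fractional offset (query) and the encoded relative positions of the
    surrounding integer pixels (keys) decides how window features are mixed.
    """
    H, Wd, _ = feat.shape
    cy, cx = int(np.floor(pos[0])), int(np.floor(pos[1]))
    # Query: positional encoding of the sub-pixel offset, projected.
    q = sinusoidal_encoding(np.array(pos) - np.array([cy, cx])) @ W_q
    keys, vals = [], []
    for dy in range(-window, window + 1):
        for dx in range(-window, window + 1):
            y = int(np.clip(cy + dy, 0, H - 1))
            x = int(np.clip(cx + dx, 0, Wd - 1))
            rel = np.array(pos) - np.array([y, x], dtype=float)
            keys.append(sinusoidal_encoding(rel) @ W_k)     # encoded key
            vals.append(feat[y, x])                         # window feature
    keys, vals = np.stack(keys), np.stack(vals)             # (K, D), (K, C)
    logits = keys @ q / np.sqrt(q.size)                     # scaled dot-product
    attn = np.exp(logits - logits.max())
    attn /= attn.sum()                                      # softmax weights
    return attn @ vals                                      # (C,) resampled value

# Toy usage: resample a random 8x8x16 feature map at a fractional position.
feat = rng.standard_normal((8, 8, 16))
W_q = rng.standard_normal((16, 16)) * 0.1  # 16 = 2 coords * 2 (sin,cos) * 4 freqs
W_k = rng.standard_normal((16, 16)) * 0.1
out = implicit_resample(feat, (3.4, 5.7), W_q, W_k)
```

Unlike bilinear interpolation, whose weights are a fixed low-pass function of the offset, the attention weights here are a learned function of the encoded coordinates, so the resampler can retain more of the reference frequency spectrum.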


Results


Task                    Dataset               Model  Metric  Value   Global Rank
Video Super-Resolution  REDS4 - 4x upscaling  IART   PSNR    32.90   #1
Video Super-Resolution  REDS4 - 4x upscaling  IART   SSIM    0.9138  #1
Video Super-Resolution  Vid4 - 4x upscaling   IART   PSNR    28.26   #1
Video Super-Resolution  Vid4 - 4x upscaling   IART   SSIM    0.8517  #1
