Memory-Augmented Non-Local Attention for Video Super-Resolution

CVPR 2022  ·  Jiyang Yu, Jingen Liu, Liefeng Bo, Tao Mei

In this paper, we propose a novel video super-resolution method that generates high-fidelity high-resolution (HR) videos from low-resolution (LR) ones. Previous methods predominantly leverage temporally neighboring frames to assist the super-resolution of the current frame. These methods achieve limited performance because they suffer from the challenge of spatial frame alignment and from the lack of useful information in similar LR neighbor frames. In contrast, we devise a cross-frame non-local attention mechanism that performs video super-resolution without frame alignment, making it more robust to large motions in the video. In addition, to acquire information beyond neighbor frames, we design a novel memory-augmented attention module that memorizes general video details during super-resolution training. Experimental results indicate that our method achieves superior performance on large-motion videos compared to state-of-the-art methods, without aligning frames. Our source code will be released.
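To illustrate the idea behind cross-frame non-local attention, the sketch below shows a generic (hypothetical) version of the mechanism: every position in the current frame attends to every position in the neighbor frames, so no explicit spatial alignment is needed. This is a minimal illustration of the general technique, not the paper's actual architecture; the function name, shapes, and plain dot-product similarity are all assumptions. The paper's memory-augmented module can be pictured the same way, with `neighbor_feats` replaced by a learned memory bank shared across videos.

```python
import numpy as np

def cross_frame_nonlocal_attention(query_feat, neighbor_feats):
    """Hypothetical sketch of cross-frame non-local attention.

    query_feat:     (N, C) features of the current LR frame (N = H*W positions).
    neighbor_feats: (M, C) features gathered from neighbor frames (or, in the
                    memory-augmented variant, a learned memory bank).
    Returns (N, C) aggregated features. Because each query position attends to
    all neighbor positions, large motion needs no explicit alignment step.
    """
    # Similarity between each current-frame position and each neighbor position.
    scores = query_feat @ neighbor_feats.T               # (N, M)
    # Softmax over neighbor positions (numerically stabilized).
    scores = scores - scores.max(axis=1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    # Weighted aggregation of neighbor features for each query position.
    return weights @ neighbor_feats                      # (N, C)

# Toy usage: 4 query positions, 6 neighbor positions, 8 channels.
rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
n = rng.standard_normal((6, 8))
out = cross_frame_nonlocal_attention(q, n)
print(out.shape)  # (4, 8)
```

In a real network the query, key, and value features would come from learned projections of convolutional feature maps, but the attention arithmetic is the same.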


Results from the Paper


Task: Analog Video Restoration  ·  Dataset: TAPE  ·  Model: MANA

Metric   Value   Global Rank
LPIPS    0.206   #7
VMAF     40.28   #7
PSNR     27.81   #7
SSIM     0.843   #7

Methods


No methods listed for this paper.