Real-Time Video Super-Resolution with Spatio-Temporal Networks and Motion Compensation

Convolutional neural networks have enabled accurate image super-resolution in real-time. However, recent attempts to benefit from temporal correlations in video super-resolution have been limited to naive or inefficient architectures. In this paper, we introduce spatio-temporal sub-pixel convolution networks that effectively exploit temporal redundancies and improve reconstruction accuracy while maintaining real-time speed. Specifically, we discuss the use of early fusion, slow fusion and 3D convolutions for the joint processing of multiple consecutive video frames. We also propose a novel joint motion compensation and video super-resolution algorithm that is orders of magnitude more efficient than competing methods, relying on a fast multi-resolution spatial transformer module that is end-to-end trainable. These contributions provide both higher accuracy and temporally more consistent videos, which we confirm qualitatively and quantitatively. Relative to single-frame models, spatio-temporal networks can either reduce the computational cost by 30% whilst maintaining the same quality or provide a 0.2dB gain for a similar computational cost. Results on publicly available datasets demonstrate that the proposed algorithms surpass current state-of-the-art performance in both accuracy and efficiency.
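The two core operations the abstract names — early fusion of consecutive frames and sub-pixel convolution — can be illustrated with a minimal NumPy sketch. The function names `early_fusion` and `pixel_shuffle` are illustrative, not taken from the paper's code, and the sketch omits the convolutional layers and motion compensation entirely; it only shows the frame-stacking and channel-to-space rearrangement steps.

```python
import numpy as np

def early_fusion(frames):
    """Early fusion: stack T consecutive (C, H, W) frames along the
    channel axis so a 2D network can process them jointly."""
    return np.concatenate(frames, axis=0)  # -> (T*C, H, W)

def pixel_shuffle(x, r):
    """Final step of sub-pixel convolution: rearrange a (C*r^2, H, W)
    feature tensor into a (C, H*r, W*r) high-resolution output."""
    c, h, w = x.shape
    assert c % (r * r) == 0, "channels must be divisible by r^2"
    oc = c // (r * r)
    x = x.reshape(oc, r, r, h, w)   # split channels into (oc, r, r)
    x = x.transpose(0, 3, 1, 4, 2)  # interleave: (oc, h, r, w, r)
    return x.reshape(oc, h * r, w * r)

# Toy example: 4 feature channels, 2x2 spatial, upscale factor r=2 -> (1, 4, 4)
feat = np.arange(16, dtype=np.float32).reshape(4, 2, 2)
hr = pixel_shuffle(feat, 2)
print(hr.shape)  # (1, 4, 4)
```

Because the upscaling happens only in this cheap rearrangement, all convolutions run at low resolution, which is what makes the approach real-time.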

CVPR 2017
Task                    Dataset                                    Model    Metric  Value   Global Rank
----------------------  -----------------------------------------  -------  ------  ------  -----------
Video Super-Resolution  MSU Video Upscalers: Quality Enhancement   VESPCN   PSNR    26.92   # 41
                                                                            SSIM    0.932   # 39
                                                                            VMAF    53.96   # 11
Video Super-Resolution  Vid4 - 4x upscaling                        bicubic  PSNR    23.82   # 21
                                                                            SSIM    0.6548  # 18
                                                                            MOVIE   9.31    # 5
Video Super-Resolution  Vid4 - 4x upscaling                        VESPCN   PSNR    25.35   # 17
                                                                            SSIM    0.7557  # 13
                                                                            MOVIE   5.82    # 2
