Information Prebuilt Recurrent Reconstruction Network for Video Super-Resolution

10 Dec 2021  ·  Shuyun Wang, Ming Yu, Cuihong Xue, Yingchun Guo, Gang Yan

Video super-resolution (VSR) methods based on recurrent convolutional networks have strong temporal modeling capability for video sequences. However, the temporal receptive fields of the recurrent units in a unidirectional recurrent network are unbalanced: frames reconstructed early in the sequence receive less spatio-temporal information, which leads to blurring or artifacts. Although a bidirectional recurrent network can alleviate this problem, it requires more memory and cannot serve the many tasks with low-latency requirements. To address these problems, we propose an end-to-end information prebuilt recurrent reconstruction network (IPRRN), consisting of an information prebuilt network (IPNet) and a recurrent reconstruction network (RRNet). The information prebuilt network integrates sufficient information from the front of the video to build the hidden state needed by the initial recurrent unit, helping to restore the earlier frames and balancing the input information across time steps. In addition, we present an efficient recurrent reconstruction network that outperforms existing unidirectional recurrent schemes in all aspects. Extensive experiments verify the effectiveness of the proposed network, which achieves better quantitative and qualitative results than existing state-of-the-art methods.
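The abstract only outlines the two-part layout, so the following is a minimal PyTorch sketch of how such a design could be wired together: an information prebuilt module (here called `IPNet`) fuses the leading low-resolution frames into an initial hidden state, and a unidirectional recurrent module (`RRNet`) propagates that state forward while upsampling each frame. All module names, channel counts, the number of prebuilt frames, and the block structure are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class IPNet(nn.Module):
    """Builds the initial hidden state from the leading frames of the clip (illustrative)."""
    def __init__(self, in_ch=3, hid_ch=64, n_prebuilt=5):
        super().__init__()
        self.n_prebuilt = n_prebuilt
        self.fuse = nn.Sequential(
            nn.Conv2d(in_ch * n_prebuilt, hid_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hid_ch, hid_ch, 3, padding=1),
        )

    def forward(self, lr_frames):                         # lr_frames: (B, T, C, H, W)
        head = lr_frames[:, : self.n_prebuilt]            # frames from the front of the video
        b, t, c, h, w = head.shape
        return self.fuse(head.reshape(b, t * c, h, w))    # prebuilt hidden state (B, hid_ch, H, W)


class RRNet(nn.Module):
    """Unidirectional recurrent reconstruction: one cell reused at every time step (illustrative)."""
    def __init__(self, in_ch=3, hid_ch=64, scale=4):
        super().__init__()
        self.cell = nn.Sequential(
            nn.Conv2d(in_ch + hid_ch, hid_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hid_ch, hid_ch, 3, padding=1),
        )
        self.upsample = nn.Sequential(
            nn.Conv2d(hid_ch, in_ch * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr_frames, hidden):                 # hidden comes from IPNet
        outputs = []
        for t in range(lr_frames.size(1)):
            hidden = self.cell(torch.cat([lr_frames[:, t], hidden], dim=1))
            outputs.append(self.upsample(hidden))
        return torch.stack(outputs, dim=1)                # (B, T, C, scale*H, scale*W)


class IPRRN(nn.Module):
    """Prebuilt hidden state feeds the first recurrent step instead of a zero state."""
    def __init__(self):
        super().__init__()
        self.ipnet, self.rrnet = IPNet(), RRNet()

    def forward(self, lr_frames):
        return self.rrnet(lr_frames, self.ipnet(lr_frames))


if __name__ == "__main__":
    clip = torch.rand(1, 7, 3, 64, 64)                    # 7-frame low-resolution clip
    print(IPRRN()(clip).shape)                            # torch.Size([1, 7, 3, 256, 256])
```

The key design point the sketch illustrates is that the first recurrent step starts from a state already carrying information about the clip, rather than from zeros, so the earliest output frames are not reconstructed from a single frame's worth of context.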
