Real-world Video Deblurring: A Benchmark Dataset and An Efficient Recurrent Neural Network

Real-world video deblurring in real time remains challenging due to the spatially and temporally varying nature of blur and the requirement of low computational cost. To improve network efficiency, we adopt residual dense blocks into RNN cells, so that spatial features of the current frame can be extracted efficiently. Furthermore, a global spatio-temporal attention module is proposed to fuse effective hierarchical features from past and future frames to help deblur the current frame. Another issue that urgently needs to be addressed is the lack of a real-world benchmark dataset. We therefore contribute a novel dataset (BSD) to the community, collecting paired blurry/sharp video clips with a co-axial beam-splitter acquisition system. Experimental results show that the proposed method (ESTRNN) achieves better deblurring performance, both quantitatively and qualitatively, at lower computational cost than state-of-the-art video deblurring methods. In addition, cross-validation experiments between datasets demonstrate that BSD generalizes better than synthetic datasets. The code and dataset are released at https://github.com/zzh-tech/ESTRNN.
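To make the architecture concrete, below is a minimal PyTorch sketch of a residual-dense-block (RDB) RNN cell in the spirit described in the abstract. It is not the authors' implementation: the channel widths, block counts, and the `RDBCell` interface are illustrative assumptions (see the official repository for the real model).

```python
import torch
import torch.nn as nn

class RDB(nn.Module):
    """Residual dense block: densely connected convs, local feature fusion,
    and a residual connection. Hyperparameters here are illustrative."""
    def __init__(self, channels=16, growth=16, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        c = channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(c, growth, 3, padding=1), nn.ReLU(inplace=True)))
            c += growth  # dense connectivity grows the concatenated width
        self.fusion = nn.Conv2d(c, channels, 1)  # local feature fusion

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return x + self.fusion(torch.cat(feats, dim=1))  # local residual

class RDBCell(nn.Module):
    """RNN cell that extracts spatial features of the current frame with
    RDBs, conditioned on the hidden state carried from the previous step."""
    def __init__(self, in_ch=3, channels=16, hidden=16, num_blocks=2):
        super().__init__()
        self.embed = nn.Conv2d(in_ch + hidden, channels, 3, padding=1)
        self.blocks = nn.Sequential(*[RDB(channels) for _ in range(num_blocks)])
        self.to_hidden = nn.Conv2d(channels, hidden, 3, padding=1)

    def forward(self, frame, hidden_state):
        x = self.embed(torch.cat([frame, hidden_state], dim=1))
        feat = self.blocks(x)
        return feat, self.to_hidden(feat)  # per-frame features, new hidden

# toy run over a 5-frame clip (T x B x C x H x W)
cell = RDBCell()
frames = torch.randn(5, 1, 3, 64, 64)
h = torch.zeros(1, 16, 64, 64)
per_frame_feats = []
for t in range(frames.size(0)):
    f, h = cell(frames[t], h)
    per_frame_feats.append(f)
```

Dense connectivity lets each frame reuse shallow features at little extra cost, which is the source of the efficiency gain the abstract refers to; the per-frame features collected here are what a fusion module would combine across time.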

PDF Abstract (ECCV 2020)

Datasets

GoPro · BSD (Beam-Splitter Dataset, introduced in this paper)
Results from the Paper


Ranked #34 on Image Deblurring on GoPro (using extra training data)

Task             | Dataset | Model  | Metric | Value  | Global Rank | Uses Extra Training Data
---------------- | ------- | ------ | ------ | ------ | ----------- | ------------------------
Image Deblurring | GoPro   | ESTRNN | PSNR   | 31.07  | #34         | Yes
Image Deblurring | GoPro   | ESTRNN | SSIM   | 0.9023 | #38         | Yes
Deblurring       | GoPro   | ESTRNN | PSNR   | 31.07  | #39         | Yes
Deblurring       | GoPro   | ESTRNN | SSIM   | 0.9023 | #44         | Yes
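PSNR and SSIM in the table above are standard full-reference image quality metrics computed against the sharp ground truth. For reference, a minimal NumPy implementation of PSNR is sketched below; this is the standard textbook definition, not the benchmark's exact evaluation script.

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# toy example; real benchmarks average PSNR over every frame of the test set
sharp = np.random.rand(256, 256, 3)
deblurred = np.clip(sharp + 0.01 * np.random.randn(256, 256, 3), 0.0, 1.0)
print(f"PSNR: {psnr(sharp, deblurred):.2f} dB")
```

Higher PSNR means the restored frame is closer to the ground truth in a mean-squared-error sense; SSIM complements it by measuring structural similarity.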

Methods


Residual dense blocks (RDBs) embedded in RNN cells for efficient per-frame spatial feature extraction, and a global spatio-temporal attention (GSA) module that fuses hierarchical features from past and future frames.
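The GSA module is described in the paper as fusing hierarchical features from neighboring frames according to their usefulness for deblurring the current one. The sketch below illustrates that idea with a simple learned spatial gate; the `GlobalFusion` class, its gating design, and all channel sizes are assumptions for illustration, not the paper's actual GSA architecture.

```python
import torch
import torch.nn as nn

class GlobalFusion(nn.Module):
    """Hedged sketch of attention-based fusion: the current frame's features
    weight the features of past/future frames before they are merged."""
    def __init__(self, channels=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.Sigmoid())
        self.merge = nn.Conv2d(channels, channels, 1)

    def forward(self, current, neighbors):
        fused = torch.zeros_like(current)
        for n in neighbors:  # features from past and future frames
            w = self.gate(torch.cat([current, n], dim=1))  # spatial attention map
            fused = fused + w * n
        return current + self.merge(fused)  # residual fusion into current frame

# toy usage: fuse two past and two future frames into the current one
fusion = GlobalFusion()
cur = torch.randn(1, 16, 64, 64)
nbrs = [torch.randn(1, 16, 64, 64) for _ in range(4)]
out = fusion(cur, nbrs)
```

The key design point, per the abstract, is that temporal fusion is selective rather than uniform: features from other frames contribute only where they are judged helpful for the current frame.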