Progressive Image Deraining Networks: A Better and Simpler Baseline

As the deraining performance of deep networks has improved, their structures and learning schemes have become increasingly complicated and diverse, making it difficult to analyze the contribution of individual network modules when developing new deraining networks. To address this issue, this paper provides a better and simpler baseline deraining network by considering network architecture, input and output, and loss functions. Specifically, by repeatedly unfolding a shallow ResNet, a progressive ResNet (PRN) is proposed to take advantage of recursive computation. A recurrent layer is further introduced to exploit the dependencies of deep features across stages, forming our progressive recurrent network (PReNet). Furthermore, intra-stage recursive computation of the ResNet can be adopted in PRN and PReNet to notably reduce network parameters with graceful degradation in deraining performance. For network input and output, we take both the stage-wise result and the original rainy image as input to each ResNet, and finally output the prediction of the residual image. As for loss functions, a single MSE or negative SSIM loss is sufficient to train PRN and PReNet. Experiments show that PRN and PReNet perform favorably on both synthetic and real rainy images. Considering their simplicity, efficiency and effectiveness, our models are expected to serve as a suitable baseline in future deraining research. The source code is available at
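The progressive scheme described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's exact configuration: channel widths, stage count, and the number of residual blocks are placeholders, and a simple convolutional gate stands in for the convolutional LSTM that PReNet uses as its recurrent layer. It shows the key ideas from the abstract: each stage takes the current estimate concatenated with the original rainy image, a hidden state carries deep features across stages, and the network predicts a residual (rain) image.

```python
# Hedged sketch of the PReNet idea, assuming PyTorch.
# Sizes and the gating mechanism are illustrative assumptions,
# not the paper's published architecture.
import torch
import torch.nn as nn


class ResBlock(nn.Module):
    """A shallow residual block, the unit that is unfolded across stages."""

    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)


class PReNetSketch(nn.Module):
    def __init__(self, ch: int = 32, stages: int = 6):
        super().__init__()
        self.stages = stages
        # Input: current derained estimate concatenated with the rainy image.
        self.head = nn.Conv2d(6, ch, 3, padding=1)
        # Recurrent mixing of new features with the previous hidden state
        # (a stand-in for the paper's convolutional LSTM).
        self.gate = nn.Conv2d(2 * ch, ch, 3, padding=1)
        self.body = ResBlock(ch)
        # Output: predicted residual (rain) image.
        self.tail = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, y):
        x, h = y, None  # start from the rainy image itself
        for _ in range(self.stages):
            f = self.head(torch.cat([x, y], dim=1))
            if h is not None:
                z = torch.sigmoid(self.gate(torch.cat([f, h], dim=1)))
                f = z * f + (1 - z) * h  # blend with features from the last stage
            h = self.body(f)
            x = y - self.tail(h)  # rainy image minus predicted residual
        return x
```

Dropping the hidden state `h` (always taking the `h is None` branch) reduces this sketch to the plain progressive ResNet (PRN) variant, which is why the recurrent layer is the only structural difference between the two baselines.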

PDF | Abstract (CVPR 2019)

Results from the Paper

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Single Image Deraining | Rain100H | PReNet | PSNR | 29.46 | #10 |
| Single Image Deraining | Rain100H | PReNet | SSIM | 0.899 | #7 |
| Single Image Deraining | Rain100L | PReNet | PSNR | 37.48 | #9 |
| Single Image Deraining | Rain100L | PReNet | SSIM | 0.979 | #7 |
| Single Image Deraining | Rain12 | PReNet | PSNR | 36.66 | #1 |
| Single Image Deraining | Rain1400 | PReNet | PSNR | 32.44 | #1 |
| Single Image Deraining | Rain1400 | PReNet | SSIM | 0.944 | #1 |
| Single Image Deraining | Test2800 | PReNet | SSIM | 0.916 | #6 |