Multi-Decoding Deraining Network and Quasi-Sparsity Based Training

CVPR 2021  ·  Yinglong Wang, Chao Ma, Bing Zeng

Existing deep deraining models are mainly trained by directly minimizing the statistical differences between their outputs and rain-free ground truths; they emphasize learning a supervised mapping from rainy images to rain-free images. Despite their demonstrated success, these methods do not perform well at restoring fine-grained local details or removing blurry rain traces. In this work, we exploit the intrinsic priors of rainy images and develop intrinsic loss functions for training deraining networks that decompose a rainy image into a rain-free background layer and a rain layer containing intact rain streaks. To this end, we introduce a quasi-sparsity prior to train the network so that it generates two sparse layers, each with the intact textures of different objects. We then exploit a low-value prior to compensate for sparsity, forcing all rain streaks into one layer and all non-rain content into the other so that image details are restored. We further introduce a multi-decoding structure that separately supervises the generation of multiple types of deraining features, helping the network learn the features that contribute most to deraining in their respective spaces. Moreover, our model stabilizes feature values across these spaces via information sharing, which alleviates potential artifacts and also speeds up inference. Extensive experiments show that the proposed deraining method outperforms state-of-the-art approaches in both effectiveness and efficiency.

PDF Abstract
No code implementations yet.
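Since no official implementation is available, the following is a minimal, illustrative sketch of how the intrinsic priors described in the abstract could be expressed as training losses in PyTorch. It is not the authors' code: the loss names, formulations, and weights below are assumptions for illustration only, under the assumption that the network outputs a background layer and a rain layer that should sum to the rainy input.

```python
# Hedged sketch (not the paper's implementation) of layer-decomposition training losses.
# Assumptions: the deraining network predicts a background layer B and a rain layer R
# for a rainy input O, with O ~ B + R; all function names and weights are placeholders.

import torch
import torch.nn.functional as F


def quasi_sparsity_loss(layer: torch.Tensor) -> torch.Tensor:
    """Encourage a (quasi-)sparse gradient field in a decomposed layer.

    Assumed form: an L1 penalty on horizontal and vertical image gradients,
    a common surrogate for sparsity priors in layer-decomposition models.
    """
    dx = layer[..., :, 1:] - layer[..., :, :-1]
    dy = layer[..., 1:, :] - layer[..., :-1, :]
    return dx.abs().mean() + dy.abs().mean()


def low_value_loss(rain_layer: torch.Tensor) -> torch.Tensor:
    """Assumed low-value prior: keep the rain layer's intensities small so that
    non-rain content is pushed into the background layer."""
    return rain_layer.abs().mean()


def decomposition_loss(rainy: torch.Tensor,
                       background: torch.Tensor,
                       rain: torch.Tensor,
                       gt_background: torch.Tensor,
                       w_sparse: float = 0.1,
                       w_low: float = 0.05) -> torch.Tensor:
    """Combine a supervised fidelity term with the two intrinsic priors.

    The weights and the exact combination are placeholders, not the paper's values.
    """
    recon = F.l1_loss(background + rain, rainy)      # the two layers should explain the input
    fidelity = F.l1_loss(background, gt_background)  # supervised term on the background layer
    sparse = quasi_sparsity_loss(background) + quasi_sparsity_loss(rain)
    low = low_value_loss(rain)
    return fidelity + recon + w_sparse * sparse + w_low * low


if __name__ == "__main__":
    # Toy usage with random tensors standing in for (N, C, H, W) images.
    rainy = torch.rand(2, 3, 64, 64)
    gt = torch.rand(2, 3, 64, 64)
    background = torch.rand(2, 3, 64, 64, requires_grad=True)
    rain = torch.rand(2, 3, 64, 64, requires_grad=True)
    loss = decomposition_loss(rainy, background, rain, gt)
    loss.backward()
    print(float(loss))
```

The gradient-L1 term is used here only as a conventional stand-in for a sparsity prior; the paper's quasi-sparsity formulation, its low-value prior, and the multi-decoding supervision may differ in detail.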

Datasets


No datasets listed for this paper.

Results from the Paper


No results listed for this paper.

Methods


No methods listed for this paper.