Depth-Attentional Features for Single-Image Rain Removal

Rain is a common weather phenomenon in which object visibility varies with depth from the camera: faraway objects are visually obscured more by fog than by rain streaks. Existing methods and datasets for rain removal, however, ignore these physical properties, limiting their effectiveness on real photos. In this work, we first analyze the visual effects of rain with respect to scene depth and formulate a rain imaging model that jointly accounts for rain streaks and fog; based on this model, we prepare a new dataset called RainCityscapes, with rain streaks and fog synthesized on real outdoor photos. Furthermore, we design an end-to-end deep neural network that learns depth-attentional features via a depth-guided attention mechanism and regresses a residual map to produce the rain-free output image. We conducted various experiments to compare our method visually and quantitatively with several state-of-the-art methods, demonstrating its superiority over the others.
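The depth dependence described above can be illustrated with a small sketch. This is not the paper's exact formulation; it combines the standard atmospheric scattering (haze) model with a depth-attenuated rain-streak layer, where the attenuation coefficients `beta` and `alpha` and the `airlight` value are illustrative assumptions:

```python
import numpy as np

def compose_rainy_image(clean, depth, streaks, airlight=0.8, beta=0.05, alpha=1.5):
    """Illustrative depth-dependent rain composition (not the paper's exact model).

    Fog follows the atmospheric scattering model with transmission
    t(x) = exp(-beta * d(x)), so distant pixels are dominated by fog,
    while rain streak visibility decays with depth via exp(-alpha * d(x)).
    All images are float arrays in [0, 1]; depth is in arbitrary units.
    """
    t = np.exp(-beta * depth)                  # fog transmission per pixel
    streak_vis = np.exp(-alpha * depth)        # streaks fade with distance
    fogged = clean * t + airlight * (1.0 - t)  # standard haze composition
    rainy = fogged + streaks * streak_vis      # add depth-attenuated streaks
    return np.clip(rainy, 0.0, 1.0)
```

With this composition, a pixel at depth 0 keeps its clean intensity plus the full streak layer, while a very distant pixel converges to the airlight value, matching the observation that far-away objects are dominated by fog rather than streaks.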


Datasets


Introduced in the Paper:

RainCityscapes

Used in the Paper:

Cityscapes Raindrop
Task                    Dataset         Model    Metric  Value   Global Rank
Single Image Deraining  RainCityscapes  DAF-Net  PSNR    30.06   # 5
Single Image Deraining  RainCityscapes  DAF-Net  SSIM    0.9530  # 5
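For reference, the PSNR value reported above measures, in decibels, how close the derained output is to the clean ground-truth image; higher is better. A minimal implementation of the standard definition:

```python
import numpy as np

def psnr(reference, restored, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a clean reference image
    and a restored (derained) image, both as float arrays in [0, max_val]."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

SSIM, the second metric in the table, additionally compares local luminance, contrast, and structure; in practice it is usually computed with a library routine such as scikit-image's `structural_similarity` rather than reimplemented by hand.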

Methods


No methods listed for this paper.