In this work, we aim to exploit the intrinsic priors of rainy images and develop intrinsic loss functions to facilitate the training of deraining networks, which decompose a rainy image into a rain-free background layer and a rain layer containing the intact rain streaks.
Single image deraining regards an input image as a fusion of a background image, a transmission map, rain streaks, and atmosphere light.
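The composition described above can be sketched as a formula. One common formulation in the deraining literature combines the background, rain streaks, transmission, and atmospheric light as O = T * (B + S) + (1 - T) * A; the function and variable names below are illustrative, not taken from the paper:

```python
import numpy as np

def compose_rainy_image(background, streaks, transmission, atmosphere):
    """Compose a rainy observation under a common additive formulation:
    O = T * (B + S) + (1 - T) * A.
    `background` is the rain-free image B, `streaks` the rain layer S,
    `transmission` the per-pixel map T, `atmosphere` the global light A.
    This is an illustrative sketch, not the paper's exact model."""
    return transmission * (background + streaks) + (1.0 - transmission) * atmosphere

rng = np.random.default_rng(0)
B = rng.uniform(0.0, 1.0, (4, 4, 3))   # rain-free background
S = rng.uniform(0.0, 0.2, (4, 4, 3))   # rain-streak layer
T = rng.uniform(0.5, 1.0, (4, 4, 1))   # per-pixel transmission
A = 0.8                                # global atmospheric light
O = compose_rainy_image(B, S, T, A)
```

With full transmission (T = 1) the model reduces to the simple additive rain model O = B + S, and with T = 0 the observation degenerates to the atmospheric light alone.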
Different rain models and novel network structures have been proposed to remove rain streaks from single rainy images.
However, existing methods usually generalize poorly: almost all of them perform well on a specific type of rain streak but comparatively poorly on other types.
To tackle this problem, in this paper we propose a novel Multimodal Attentive Metric Learning (MAML) method to model users' diverse preferences for various items.
Instead of directly using the estimated atmospheric light to train a network that computes transmission, we take it as ground truth and design a simple yet novel triangle-shaped network to learn the atmospheric light of each rainy image; this network is then fine-tuned to obtain a better estimate of the atmospheric light during the training of the transmission network.
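The ground-truth atmospheric light used to supervise such a network is typically obtained with a heuristic. As one stand-in (borrowed from dark-channel-style dehazing, not the paper's triangle-shaped network), one can average the brightest pixels of the per-pixel channel minimum; `top_frac` is an illustrative parameter:

```python
import numpy as np

def estimate_atmospheric_light(image, top_frac=0.001):
    """Heuristic atmospheric-light estimate (dark-channel style, borrowed
    from dehazing). This is only an illustrative stand-in for a learned
    estimator; `top_frac` selects the fraction of brightest pixels used.
    `image` is an H x W x C float array in [0, 1]."""
    dark = image.min(axis=2)                 # per-pixel minimum over channels
    n = max(1, int(top_frac * dark.size))    # number of pixels to average
    idx = np.argsort(dark.ravel())[-n:]      # brightest dark-channel pixels
    flat = image.reshape(-1, image.shape[2])
    return flat[idx].mean(axis=0)            # one estimate per colour channel

img = np.full((10, 10, 3), 0.7)
img[0, 0] = 1.0                              # one saturated (sky-like) pixel
A_hat = estimate_atmospheric_light(img)
```

On this toy image the estimate picks out the saturated pixel, illustrating why such heuristics are sensitive to bright outliers and why a learned, fine-tuned estimator can be preferable.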
Removing rain effects from an image is of importance for various applications such as autonomous driving, drone piloting, and photo editing.
Benefiting from advances in computer vision, natural language processing, and information retrieval, visual question answering (VQA), which aims to answer questions about an image or a video, has received a great deal of attention over the past few years.
In this paper, we present a novel rain removal method that consists of two steps, i.e., detection of rain streaks and reconstruction of the rain-removed image.
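The detect-then-reconstruct pipeline can be sketched with a toy example. The detector and reconstruction below (brightness thresholding against a 3x3 neighbourhood mean, then mean-fill) are purely illustrative stand-ins, assumed for exposition rather than taken from the paper:

```python
import numpy as np

def remove_rain(gray, thresh=0.15):
    """Toy two-step pipeline in the spirit of detect-then-reconstruct:
    (1) flag pixels noticeably brighter than their 3x3 neighbourhood mean
        as candidate rain streaks,
    (2) replace flagged pixels with that neighbourhood mean.
    `gray` is a 2-D float image in [0, 1]; `thresh` is illustrative."""
    h, w = gray.shape
    padded = np.pad(gray, 1, mode="edge")
    # 3x3 neighbourhood mean via nine shifted views
    shifts = [padded[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    local_mean = np.mean(shifts, axis=0)
    mask = (gray - local_mean) > thresh            # step 1: detection
    return np.where(mask, local_mean, gray), mask  # step 2: reconstruction

gray = np.full((5, 5), 0.2)
gray[2, 2] = 0.9                                   # a bright streak pixel
derained, streak_mask = remove_rain(gray)
```

Real methods replace both steps with far stronger components (learned detectors and image-reconstruction networks), but the two-stage structure is the same.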