Spatio-Temporal SAR-Optical Data Fusion for Cloud Removal via a Deep Hierarchical Model

Cloud removal is an important topic in Remote Sensing, as it enables the use of high-resolution optical images for Earth monitoring and study. Related techniques have been investigated for years, with a progressively clearer view of the appropriate methods to adopt, from multi-spectral to inpainting approaches. Recent applications of deep generative models and sequence-to-sequence models have proved their ability to advance the field significantly. Nevertheless, some gaps remain, mostly related to the amount of cloud coverage, the density and thickness of clouds, and temporal landscape changes. In this work, we fill some of these gaps by introducing a novel multi-modal method that uses different sources of information, both spatial and temporal, to restore the whole optical scene of interest. The proposed method introduces an innovative deep model that fuses the outcomes of both temporal-sequence blending and direct translation from Synthetic Aperture Radar (SAR) to optical images to obtain a pixel-wise restoration of the whole scene. The advantage of our approach is demonstrated across a variety of atmospheric conditions, tested on a dataset we have generated and made available. Quantitative and qualitative results show that the proposed method produces cloud-free images, preserving scene details without relying on a largely clean reference image and coping with landscape changes.
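To make the described two-branch fusion idea concrete, below is a minimal PyTorch sketch of such a model. All module names, layer sizes, and channel counts (e.g., 2 SAR polarizations, 13 optical bands, a 3-frame temporal window, as in Sentinel-1/-2 data) are illustrative assumptions, not the paper's actual architecture: one branch translates SAR to the optical domain, a second blends a temporal sequence of optical acquisitions, and a fusion head merges both estimates into a pixel-wise cloud-free reconstruction.

```python
# Hypothetical sketch of a two-branch SAR-optical fusion model for cloud
# removal; layer sizes and channel counts are assumptions, not the paper's.
import torch
import torch.nn as nn

class TwoBranchCloudRemoval(nn.Module):
    def __init__(self, sar_ch=2, opt_ch=13, seq_len=3):
        super().__init__()
        # Branch 1: direct SAR -> optical translation
        self.sar_branch = nn.Sequential(
            nn.Conv2d(sar_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, opt_ch, 3, padding=1),
        )
        # Branch 2: temporal-sequence blending of past optical frames
        self.temporal_branch = nn.Sequential(
            nn.Conv2d(seq_len * opt_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, opt_ch, 3, padding=1),
        )
        # Fusion head: merge both estimates into the restored scene
        self.fusion = nn.Sequential(
            nn.Conv2d(2 * opt_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, opt_ch, 3, padding=1),
        )

    def forward(self, sar, opt_seq):
        # sar: (B, sar_ch, H, W); opt_seq: (B, seq_len, opt_ch, H, W)
        b, t, c, h, w = opt_seq.shape
        from_sar = self.sar_branch(sar)
        from_time = self.temporal_branch(opt_seq.reshape(b, t * c, h, w))
        # Pixel-wise restoration from the concatenated branch outputs
        return self.fusion(torch.cat([from_sar, from_time], dim=1))

# Usage: restored = TwoBranchCloudRemoval()(sar_tile, optical_sequence)
```

In practice, each branch in a model of this kind would typically be a deeper encoder-decoder (e.g., U-Net-like) network; the plain convolutional stacks above only serve to show how the two information sources feed a single pixel-wise fusion stage.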
