Self-supervised SAR-optical Data Fusion and Land-cover Mapping using Sentinel-1/-2 Images

9 Mar 2021 · Yuxing Chen, Lorenzo Bruzzone

The effective combination of the complementary information provided by the huge amount of unlabeled multi-sensor data (e.g., Synthetic Aperture Radar (SAR) and optical images) is a critical topic in remote sensing. Recently, contrastive learning methods have achieved remarkable success in obtaining meaningful feature representations from multi-view data. However, these methods focus only on image-level features, which may not satisfy the requirements of dense prediction tasks such as land-cover mapping. In this work, we propose a self-supervised framework for SAR-optical data fusion and land-cover mapping. SAR and optical images are fused by means of a multi-view contrastive loss at the image level and the super-pixel level, considering early, intermediate, and late fusion strategies separately. For the land-cover mapping task, we assign each pixel a land-cover class by jointly using the pre-trained features and the spectral information of the image itself. Experimental results show that the proposed approach achieves accuracy comparable to that of the image-level contrastive learning method while reducing the dimensionality of the features. Among the three fusion strategies, intermediate fusion achieves the best performance. Combining the pixel-level fusion approach with spectral indices further improves land-cover mapping with respect to the image-level fusion approach, especially when few pseudo-labels are available.
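
The core self-supervision signal described here is a multi-view contrastive loss between co-registered SAR and optical embeddings. As an illustration only, the following is a minimal PyTorch sketch of such a loss; the function name, the encoder names in the usage comment, and the temperature value are assumptions for this sketch, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def multiview_info_nce(z_sar: torch.Tensor, z_opt: torch.Tensor,
                       temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style contrastive loss between two views of the same scenes.

    z_sar, z_opt: (N, D) embeddings of N co-registered SAR/optical patches
    (or super-pixels). Matching rows are positive pairs; all other pairs
    in the batch serve as negatives.
    """
    z_sar = F.normalize(z_sar, dim=1)
    z_opt = F.normalize(z_opt, dim=1)
    logits = z_sar @ z_opt.t() / temperature           # (N, N) cosine similarities
    targets = torch.arange(z_sar.size(0), device=z_sar.device)
    # Symmetrize over the SAR->optical and optical->SAR retrieval directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage sketch (sar_encoder and opt_encoder are hypothetical backbones):
# z_sar = sar_encoder(sar_batch)   # (N, D)
# z_opt = opt_encoder(opt_batch)   # (N, D)
# loss = multiview_info_nce(z_sar, z_opt)
```

Applied at the super-pixel level rather than on whole-image embeddings, the same loss yields the dense, lower-dimensional features the abstract refers to for land-cover mapping.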
