SSL4EO-S12: A Large-Scale Multi-Modal, Multi-Temporal Dataset for Self-Supervised Learning in Earth Observation

Self-supervised pre-training bears the potential to generate expressive representations without human annotation. Most pre-training in Earth observation (EO), however, is based on ImageNet or on medium-sized, labeled remote sensing (RS) datasets. We share SSL4EO-S12 (Self-Supervised Learning for Earth Observation - Sentinel-1/2), an unlabeled RS dataset that assembles a large-scale, global, multimodal, and multi-seasonal corpus of satellite imagery from the ESA Sentinel-1 & -2 satellite missions. We demonstrate that SSL4EO-S12 succeeds in self-supervised pre-training for a set of methods: MoCo-v2, DINO, MAE, and data2vec. The resulting models yield downstream performance close to, or surpassing, that of supervised learning. In addition, pre-training on SSL4EO-S12 outperforms pre-training on existing datasets. We make the dataset, related source code, and pre-trained models openly available at https://github.com/zhu-xlab/SSL4EO-S12.
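As a concrete illustration of how such pre-trained backbones are typically reused downstream, below is a minimal PyTorch sketch of adapting a ResNet50 to 13-band Sentinel-2 input and fine-tuning it for 19-class BigEarthNet multi-label classification. The checkpoint filename and state-dict layout are hypothetical; see the repository above for the actual released checkpoints.

```python
# Minimal sketch: adapt an ImageNet-style ResNet50 to 13-band Sentinel-2 input
# and fine-tune it for 19-class BigEarthNet multi-label classification.
# The checkpoint filename below is hypothetical; use the released SSL4EO-S12
# checkpoints from the repository in practice.
from pathlib import Path

import torch
import torch.nn as nn
from torchvision.models import resnet50

NUM_BANDS = 13    # Sentinel-2 L1C spectral bands
NUM_CLASSES = 19  # BigEarthNet-19 label set

# Backbone: swap the 3-channel RGB stem for a 13-channel one,
# and the classification head for a 19-way multi-label output.
model = resnet50(weights=None)
model.conv1 = nn.Conv2d(NUM_BANDS, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Hypothetical checkpoint path; strict=False tolerates the replaced head.
ckpt = Path("ssl4eo_s12_moco_rn50.pth")
if ckpt.exists():
    state = torch.load(ckpt, map_location="cpu")
    model.load_state_dict(state, strict=False)

# One multi-label fine-tuning step with binary cross-entropy over the labels.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

images = torch.randn(4, NUM_BANDS, 128, 128)            # stand-in Sentinel-2 patches
targets = torch.randint(0, 2, (4, NUM_CLASSES)).float()  # stand-in label vectors

logits = model(images)
loss = criterion(logits, targets)
loss.backward()
optimizer.step()
```

Replacing the RGB stem with a 13-channel convolution is the standard way to feed multispectral imagery to an ImageNet-style backbone while keeping the pre-trained weights of all other layers.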


Datasets


Introduced in the Paper: SSL4EO-S12

Used in the Paper: BigEarthNet, SEN12MS

Results from the Paper


Task: Multi-Label Image Classification

Dataset                          Model                           Metric       Value  Rank  Official Split
BigEarthNet                      MoCo-v2 (ResNet50, fine-tuned)  mAP (micro)  91.8   #1    No
BigEarthNet                      MAE (ViT-S/16, fine-tuned)      mAP (micro)  88.9   #4    No
BigEarthNet                      MoCo-v3 (ViT-S/16, fine-tuned)  mAP (micro)  89.9   #2    No
BigEarthNet (official test set)  MoCo-v3 (ViT-S/16)              mAP (micro)  89.3   #1
                                                                 F1 Score     80.5   #2
BigEarthNet (official test set)  MoCo-v2 (ResNet50)              mAP (micro)  88.7   #3
                                                                 F1 Score     79.8   #4

None of the entries are marked as using extra training data; "Official Split: No" indicates that the first three results are not evaluated on the official BigEarthNet split.
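
Both reported metrics are micro-averaged over all (sample, label) pairs rather than per class. A minimal sketch of computing them with scikit-learn on synthetic predictions (the paper's own evaluation code may differ):

```python
# Sketch: micro-averaged mAP and F1 for multi-label classification,
# computed on synthetic predictions; the paper's evaluation code may differ.
import numpy as np
from sklearn.metrics import average_precision_score, f1_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(100, 19))  # 19 BigEarthNet labels
y_score = rng.random((100, 19))              # model probabilities
y_pred = (y_score > 0.5).astype(int)         # thresholded hard predictions

map_micro = average_precision_score(y_true, y_score, average="micro")
f1_micro = f1_score(y_true, y_pred, average="micro")
print(f"mAP (micro): {map_micro:.3f}  F1 (micro): {f1_micro:.3f}")
```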
