Geography-Aware Self-Supervised Learning

Contrastive learning methods have significantly narrowed the gap between supervised and unsupervised learning on computer vision tasks. In this paper, we explore their application to geo-located datasets, e.g., remote sensing, where unlabeled data is often abundant but labeled data is scarce. We first show that due to their different characteristics, a non-trivial gap persists between contrastive and supervised learning on standard benchmarks. To close the gap, we propose novel training methods that exploit the spatio-temporal structure of remote sensing data. We leverage spatially aligned images over time to construct temporal positive pairs in contrastive learning, and geo-location to design pre-text tasks. Our experiments show that our proposed method closes the gap between contrastive and supervised learning on image classification, object detection, and semantic segmentation for remote sensing. Moreover, we demonstrate that the proposed method can also be applied to geo-tagged ImageNet images, improving downstream performance on various tasks. The project webpage can be found at geography-aware-ssl.github.io.
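
The two ingredients described in the abstract (temporal positive pairs and a geo-location pre-text task) can be sketched roughly as below. This is a minimal, hypothetical PyTorch sketch, not the authors' released implementation: the dataset layout (`locations`, `geo_cluster_ids`), the class name `TemporalPositivePairDataset`, and the helper `geography_aware_loss` are illustrative assumptions, and the MoCo-v2 encoders, queue, and augmentation pipeline are omitted.

```python
# Hypothetical sketch: temporal positives + geo-location pre-text task.
# Dataset layout and names are assumptions, not the authors' code.
import random

import torch.nn.functional as F
from PIL import Image
from torch.utils.data import Dataset


class TemporalPositivePairDataset(Dataset):
    """Yields two views of the same location captured at different times."""

    def __init__(self, locations, geo_cluster_ids, transform):
        # locations: list of lists of image paths, one inner list per location
        # geo_cluster_ids: coarse geo-cluster label per location, used as the
        #   target of the geo-location pre-text (classification) task
        # transform: MoCo-v2 style augmentation pipeline
        self.locations = locations
        self.geo_cluster_ids = geo_cluster_ids
        self.transform = transform

    def __len__(self):
        return len(self.locations)

    def __getitem__(self, idx):
        paths = self.locations[idx]
        # Two timestamps of the same place form a "temporal positive" pair;
        # the temporal gap acts as a natural augmentation on top of the
        # usual random crops and color jitter.
        path_q, path_k = random.choice(paths), random.choice(paths)
        query = self.transform(Image.open(path_q).convert("RGB"))
        key = self.transform(Image.open(path_k).convert("RGB"))
        return query, key, self.geo_cluster_ids[idx]


def geography_aware_loss(contrastive_loss, geo_logits, geo_targets, lam=1.0):
    # Combine the contrastive (e.g., MoCo InfoNCE) loss with an auxiliary
    # geo-location classification loss; lam weights the auxiliary term.
    return contrastive_loss + lam * F.cross_entropy(geo_logits, geo_targets)
```

In pre-training, `query` and `key` would be fed to the query and key encoders and treated as a positive pair by the contrastive loss, while the coarse geo-cluster label supervises an auxiliary classification head on the query features.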

ICCV 2021

Datasets

fMoW, SpaceNet 1, ImageNet

Results from the Paper


Ranked #5 on Semantic Segmentation on SpaceNet 1 (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Semantic Segmentation | SpaceNet 1 | PSANet w/ ResNet50 backbone - FMoW self-supervised pre-training w/ MoCo-V2 + Temporal Positives | Mean IoU | 78.48 | #5 |
| Semantic Segmentation | SpaceNet 1 | PSANet w/ ResNet50 backbone - FMoW self-supervised pre-training w/ MoCo-V2 | Mean IoU | 78.05 | #6 |
| Semantic Segmentation | SpaceNet 1 | PSANet w/ ResNet50 backbone - FMoW pretrained | Mean IoU | 75.57 | #7 |
| Semantic Segmentation | SpaceNet 1 | PSANet w/ ResNet50 backbone - ImageNet pretrained | Mean IoU | 75.23 | #8 |
| Semantic Segmentation | SpaceNet 1 | PSANet w/ ResNet50 backbone | Mean IoU | 74.93 | #9 |

Methods

MoCo v2, PSANet, ResNet-50