Multi-Modal Temporal Attention Models for Crop Mapping from Satellite Time Series

14 Dec 2021 · Vivien Sainte Fare Garnot, Loic Landrieu, Nesrine Chehata

Optical and radar satellite time series are synergistic: optical images contain rich spectral information, while C-band radar captures useful geometrical information and is immune to cloud cover. Motivated by the recent success of temporal attention-based methods across multiple crop mapping tasks, we propose to investigate how these models can be adapted to operate on several modalities. We implement and evaluate multiple fusion schemes, including a novel approach and simple adjustments to the training procedure, significantly improving performance and efficiency with little added complexity. We show that most fusion schemes have advantages and drawbacks, making each relevant for specific settings. We then evaluate the benefit of multimodality across several tasks: parcel classification, pixel-based segmentation, and panoptic parcel segmentation. We show that by leveraging both optical and radar time series, multimodal temporal attention-based models can outperform single-modality models in terms of both performance and resilience to cloud cover. To conduct these experiments, we augment the PASTIS dataset with spatially aligned radar image time series. The resulting dataset, PASTIS-R, constitutes the first large-scale, multimodal, open-access satellite time series dataset with both semantic and instance annotations.
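The fusion schemes compared in the abstract can be illustrated schematically. The sketch below is a toy NumPy illustration, not the paper's actual architecture (which builds on temporal attention encoders such as L-TAE/U-TAE): early fusion concatenates the optical and radar channels at each date and runs one shared temporal encoder, while late fusion runs one encoder per modality and merges the resulting embeddings. The attention pooling, array shapes, and weight vectors are assumptions for illustration only.

```python
import numpy as np

def temporal_attention_pool(x, w_query):
    # x: (T, C) time series of per-date features. A single learned query
    # scores each of the T dates; a softmax over time yields attention
    # weights, and the weighted sum gives one pooled embedding of shape (C,).
    scores = x @ w_query                      # (T,)
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ x                        # (C,)

def early_fusion(optical, radar, w_query):
    # Early fusion: concatenate modalities channel-wise at each date,
    # then apply one shared temporal encoder to the fused series.
    fused = np.concatenate([optical, radar], axis=1)  # (T, C_opt + C_rad)
    return temporal_attention_pool(fused, w_query)

def late_fusion(optical, radar, w_opt, w_rad):
    # Late fusion: one temporal encoder per modality; the per-modality
    # embeddings are merged (here, concatenated) afterwards.
    e_opt = temporal_attention_pool(optical, w_opt)
    e_rad = temporal_attention_pool(radar, w_rad)
    return np.concatenate([e_opt, e_rad])
```

Both schemes produce a fixed-size embedding regardless of sequence length T, which is what allows attention-based models to handle irregular acquisition dates; the trade-off is that early fusion requires temporally aligned modalities, whereas late fusion does not.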


Datasets


Introduced in the Paper: PASTIS-R

Used in the Paper: PASTIS

Results from the Paper


| Task                  | Dataset  | Model        | Metric | Value | Global Rank |
|-----------------------|----------|--------------|--------|-------|-------------|
| Panoptic Segmentation | PASTIS-R | Early Fusion | SQ     | 82.2  | #1          |
| Panoptic Segmentation | PASTIS-R | Early Fusion | RQ     | 50.6  | #1          |
| Panoptic Segmentation | PASTIS-R | Early Fusion | PQ     | 42    | #1          |
| Semantic Segmentation | PASTIS-R | Late Fusion  | IoU    | 66.3  | #1          |
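The three panoptic metrics reported above are related: panoptic quality factors as PQ = SQ × RQ (segmentation quality times recognition quality). A quick check, with metrics expressed in percent, shows the reported numbers are mutually consistent up to rounding:

```python
def panoptic_quality(sq, rq):
    # PQ = SQ * RQ; inputs and output are percentages.
    return sq * rq / 100.0

pq = panoptic_quality(82.2, 50.6)
print(round(pq, 1))  # 41.6, consistent with the reported PQ of 42 up to rounding
```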
