Self-supervised learning guided by masked image modelling, such as the Masked AutoEncoder (MAE), has attracted wide attention for pretraining vision transformers in remote sensing.
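The core idea behind MAE-style pretraining is to hide a large fraction of image patches and train the model to reconstruct them. Below is a minimal, hypothetical sketch of the random patch-masking step only (not the authors' implementation); the patch count, mask ratio, and array shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_masking(patches, mask_ratio=0.75):
    """MAE-style masking sketch: keep a random subset of patches.

    patches: (num_patches, patch_dim) array of flattened image patches.
    Returns the visible patches and a boolean mask where True marks
    the patches the decoder would be asked to reconstruct.
    """
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))          # e.g. keep 25% of patches
    keep_idx = np.sort(rng.permutation(n)[:n_keep])
    mask = np.ones(n, dtype=bool)               # True = masked
    mask[keep_idx] = False
    return patches[keep_idx], mask

# Toy example: a 224x224 RGB image split into 16x16 patches -> 196 patches.
patches = rng.standard_normal((196, 16 * 16 * 3))
visible, mask = random_masking(patches)
print(visible.shape, int(mask.sum()))  # (49, 768) 147
```

In a full pipeline, only the visible patches would be fed to the transformer encoder, and the reconstruction loss would be computed on the masked positions.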
We propose Decoupling Common and Unique Representations (DeCUR), a simple yet effective method for multimodal self-supervised learning.
Discovering ancient agricultural terraces in desert regions is important for monitoring long-term climate changes on the Earth's surface.
Recently, to effectively train deep learning models with minimal labeled samples, unlabeled samples have also been leveraged in self-supervised and semi-supervised settings.
Self-supervised pre-training has the potential to generate expressive representations without human annotation.
In deep learning research, self-supervised learning (SSL) has received great attention, triggering interest within both the computer vision and remote sensing communities.
We present and evaluate a weakly-supervised methodology to quantify the spatio-temporal distribution of urban forests based on remotely sensed data with close-to-zero human interaction.
Experimental results employing the BigEarthNet-MM dataset demonstrate the benefits of both the ViT backbones and the proposed multimodal SSL algorithm DINO-MM.
A key challenge of supervised learning is the availability of human-labeled data.
We present a remote sensing pipeline that processes LiDAR (Light Detection And Ranging) data through machine and deep learning for archeological feature detection on big geospatial data platforms such as IBM PAIRS Geoscope.