LoveDA (Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation)

Introduced by Wang et al. in LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation, the dataset:
  1. Contains 5987 high spatial resolution (0.3 m) remote sensing images from Nanjing, Changzhou, and Wuhan
  2. Covers two distinct geographical environments, Urban and Rural
  3. Aims to advance both semantic segmentation and domain adaptation tasks
  4. Poses three considerable challenges:
    • Multi-scale objects
    • Complex background samples
    • Inconsistent class distributions

Two challenges are hosted on CodaLab: the LoveDA Semantic Segmentation Challenge and the LoveDA Unsupervised Domain Adaptation Challenge.
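Since the data is released as paired RGB tiles and per-pixel class masks for each domain, it can be wrapped in a standard segmentation pipeline. Below is a minimal PyTorch loading sketch; the directory layout (Train/Urban/images_png, Train/Urban/masks_png, etc.), file naming, and the LoveDASegmentation class name are assumptions for illustration, not an official interface, so adjust the paths to match the actual download.

```python
# Minimal loading sketch for LoveDA-style data (assumed layout, not an official API).
# Assumed structure:
#   <root>/Train/Urban/images_png/NNNN.png  (RGB image tile)
#   <root>/Train/Urban/masks_png/NNNN.png   (single-channel class mask)
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset


class LoveDASegmentation(Dataset):
    """Pairs image tiles with their land-cover masks for one split and scene."""

    def __init__(self, root, split="Train", scene="Urban", transform=None):
        base = Path(root) / split / scene
        self.image_paths = sorted((base / "images_png").glob("*.png"))
        self.mask_paths = sorted((base / "masks_png").glob("*.png"))
        assert len(self.image_paths) == len(self.mask_paths), "images and masks must pair up"
        self.transform = transform

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        image = np.array(Image.open(self.image_paths[idx]).convert("RGB"))
        mask = np.array(Image.open(self.mask_paths[idx]))  # integer class ids per pixel
        if self.transform is not None:
            image, mask = self.transform(image, mask)
        # HWC uint8 -> CHW float tensor in [0, 1]; mask stays as integer labels
        image = torch.from_numpy(image).permute(2, 0, 1).float() / 255.0
        mask = torch.from_numpy(mask).long()
        return image, mask
```

For the unsupervised domain adaptation setting, one would typically instantiate two such datasets, e.g. scene="Urban" as the labeled source and scene="Rural" as the unlabeled target (or vice versa), and feed them to separate data loaders.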
