DMML-Net: Deep Metametric Learning for Few-Shot Geographic Object Segmentation in Remote Sensing Imagery

Geographic object segmentation is a fundamental yet challenging problem in remote sensing image interpretation. The prevalent paradigm is to train deep neural networks on massive labeled samples. Although remarkable achievements have been attained, these methods suffer from a severe dependence on large-scale datasets and require a long training process with a high computational burden. To address these issues, a deep metametric learning framework, named DMML-Net, consisting of a metametric learner and a base-metric learner, is proposed for few-shot geographic object segmentation. First, DMML-Net formulates segmentation as metric-based pixel classification and develops a deep feature pyramid comparison network as the architecture of the metric learner for multiscale metric learning. Benefiting from this design, segmentation can be solved efficiently while remaining robust to the scale variations of geographic objects. Second, an affinity-based fusion mechanism is introduced to adaptively reweight and fuse semantic information across samples, effectively calibrating the prototype deviation induced by intraclass variations. Third, considering the impact of large interclass distribution divergences, DMML-Net presents a metametric training paradigm that gives the metric model flexible scalability for fast adaptation to novel tasks. After metatraining, DMML-Net can be applied to few-shot segmentation tasks of novel geographic objects with only a few gradient steps on the small training set. Experimental results on two benchmark remote sensing datasets demonstrate the validity and superiority of our method under low-shot conditions with only one to ten labeled samples.
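
To make the two core ideas in the abstract more concrete, the following is a minimal sketch (in PyTorch) of how affinity-based prototype fusion and metric-based pixel classification could look. All function names, tensor shapes, and the use of cosine similarity are our own assumptions for illustration, not the authors' released code; the paper's exact formulation of the affinity weights and the feature pyramid comparison network may differ.

```python
import torch
import torch.nn.functional as F

def fuse_prototypes(support_feats, support_masks):
    """Fuse one class prototype from K support samples via affinity weighting.

    support_feats: (K, C, H, W) features from an assumed backbone extractor
    support_masks: (K, H, W)    binary masks of the target class
    """
    # Masked average pooling -> one raw prototype per support sample, shape (K, C)
    masks = support_masks.unsqueeze(1)  # (K, 1, H, W)
    raw = (support_feats * masks).sum(dim=(2, 3)) / masks.sum(dim=(2, 3)).clamp(min=1e-6)

    # Pairwise affinity between per-sample prototypes; samples closer to the
    # consensus receive larger weights (our reading of "affinity-based fusion").
    affinity = F.cosine_similarity(raw.unsqueeze(1), raw.unsqueeze(0), dim=-1)  # (K, K)
    weights = F.softmax(affinity.mean(dim=1), dim=0)                            # (K,)
    return (weights.unsqueeze(1) * raw).sum(dim=0)                              # (C,)

def segment_by_metric(query_feats, fg_prototype, bg_prototype, tau=20.0):
    """Metric-based pixel classification: compare each query pixel with prototypes.

    query_feats: (C, H, W) query-image features; returns (2, H, W) logits.
    """
    fg = F.cosine_similarity(query_feats, fg_prototype.view(-1, 1, 1), dim=0)
    bg = F.cosine_similarity(query_feats, bg_prototype.view(-1, 1, 1), dim=0)
    return tau * torch.stack([bg, fg], dim=0)  # feed to softmax / cross-entropy
```

In the metametric training paradigm described above, a loop of this kind would be wrapped in episodic meta-training so that, at test time, adapting to a novel geographic object class requires only a few gradient steps on the small support set.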
