Spatially Self-Paced Convolutional Networks for Change Detection in Heterogeneous Images

Change detection in heterogeneous remote sensing images is a challenging problem because the two images cannot be compared directly in their original observation spaces, and most existing methods rely on a set of manually labeled samples. In this article, a spatially self-paced convolutional network (SSPCN) is constructed for change detection in an unsupervised way. Self-paced learning (SPL) is incorporated into convolutional networks to dynamically select reliable samples and to learn a representation of the relation between the two heterogeneous images. In the proposed method, the pseudo labels are initialized by a classification-based method, and each sample is assigned a weight that reflects its easiness. SPL then learns from the easy samples first and gradually takes more complex samples into account. During training, the sample weights are dynamically updated based on the current network parameters. Finally, a binary change map is obtained from the trained convolutional network. The proposed SSPCN has three main advantages over traditional methods. First, it is robust to noisy samples because only reliable samples are involved in training. Second, the samples proceed at different learning paces, dynamically adjusted according to the current sample weights, which helps the network converge to better solutions. Third, spatial information among the samples is taken into account to further enhance the robustness of the proposed method. Experimental results on four pairs of heterogeneous remote sensing images confirm the effectiveness of the proposed technique.
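The core loop the abstract describes alternates between weighting samples by their easiness and updating the network on the admitted samples. Below is a minimal sketch of that idea in PyTorch, assuming the classic hard SPL regularizer (a sample is admitted only when its loss falls below a pace threshold that grows over epochs). The network, function names, and pace schedule here are illustrative assumptions, not taken from the paper, and the paper's additional spatial weighting step is only noted in a comment.

```python
# Minimal self-paced training sketch (illustrative, not the paper's code).
# Assumes hard SPL weights w_i = 1[loss_i < lambda], with lambda growing
# each epoch so harder samples are gradually admitted into training.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Toy network relating stacked patches from the two heterogeneous images."""
    def __init__(self, in_channels=6):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.fc = nn.Linear(32, 2)  # two classes: changed / unchanged

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = x.mean(dim=(2, 3))      # global average pooling
        return self.fc(x)

def spl_weights(losses, lam):
    """Hard self-paced weights: admit a sample only if its loss < lambda.
    The paper additionally regularizes these weights with spatial
    neighborhood information, which this sketch omits."""
    return (losses < lam).float()

def train_spl(model, patches, pseudo_labels, epochs=20,
              lam=0.5, growth=1.2, lr=1e-3):
    """patches: (N, C, H, W) patch pairs stacked along channels;
    pseudo_labels: (N,) long tensor of initial pseudo labels in {0, 1}."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        logits = model(patches)
        losses = F.cross_entropy(logits, pseudo_labels, reduction="none")
        with torch.no_grad():
            w = spl_weights(losses, lam)       # easy samples get weight 1
        loss = (w * losses).sum() / w.sum().clamp(min=1.0)
        opt.zero_grad()
        loss.backward()
        opt.step()
        lam *= growth                          # pace: admit harder samples later
    return model
```

After training, the binary change map would be obtained by running every pixel's patch pair through the trained network and thresholding the class predictions.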
