Unsupervised Data Fusion With Deeper Perspective: A Novel Multisensor Deep Clustering Algorithm

The ever-growing developments in technology for capturing different types of image data [e.g., hyperspectral imaging and light detection and ranging (LiDAR)-derived digital surface models (DSMs)], along with new processing techniques, have led to rising interest in imaging applications for Earth observation. However, analyzing such datasets in parallel remains a challenging task. In this article, we propose a multisensor deep clustering (MDC) algorithm for the joint processing of multisource imaging data. The architecture of MDC is inspired by autoencoder (AE)-based networks. The MDC paradigm comprises three parallel networks: a spectral network using an autoencoder structure, a spatial network using a convolutional autoencoder structure, and a fusion network that reconstructs the concatenated image information from the concatenated latent features of the spatial and spectral networks. The proposed algorithm combines the reconstruction losses of the three networks to optimize their parameters (i.e., weights and biases) simultaneously. To validate the performance of the proposed algorithm, we use two multisensor datasets from different applications (i.e., geological and rural sites) as benchmarks. The experimental results confirm the superiority of our proposed deep clustering algorithm over a number of state-of-the-art clustering algorithms. The code will be available at https://github.com/Kasra2020/MDC.
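To make the three-branch structure concrete, the following is a minimal NumPy sketch of the data flow the abstract describes: a spectral branch and a spatial branch each produce a latent code and a reconstruction, and a fusion branch reconstructs the concatenated input from the concatenated latent codes, with the three reconstruction losses combined into one objective. All dimensions, the linear (rather than deep/convolutional) encoders, and the equal weighting of the losses are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the paper's actual network sizes differ).
n_pixels, spec_dim, spat_dim, latent = 100, 30, 20, 8

# Simulated per-pixel spectral features (e.g., hyperspectral bands)
# and spatial features (e.g., flattened patches from a LiDAR-derived DSM).
X_spec = rng.normal(size=(n_pixels, spec_dim))
X_spat = rng.normal(size=(n_pixels, spat_dim))


def linear_ae(x, d_in, d_latent, rng):
    """Stand-in linear autoencoder: returns (latent code, reconstruction)."""
    W_enc = rng.normal(scale=0.1, size=(d_in, d_latent))
    W_dec = rng.normal(scale=0.1, size=(d_latent, d_in))
    z = x @ W_enc
    return z, z @ W_dec


def mse(a, b):
    """Mean-squared reconstruction error."""
    return float(np.mean((a - b) ** 2))


# Spectral and spatial branches.
z_spec, rec_spec = linear_ae(X_spec, spec_dim, latent, rng)
z_spat, rec_spat = linear_ae(X_spat, spat_dim, latent, rng)

# Fusion branch: decode the concatenated latent features back to the
# concatenated spectral + spatial input.
z_fused = np.concatenate([z_spec, z_spat], axis=1)
X_cat = np.concatenate([X_spec, X_spat], axis=1)
W_fuse = rng.normal(scale=0.1, size=(2 * latent, spec_dim + spat_dim))
rec_fused = z_fused @ W_fuse

# Combined objective (equal weighting is an assumption; the paper only
# states that the three reconstruction losses are combined).
total_loss = mse(X_spec, rec_spec) + mse(X_spat, rec_spat) + mse(X_cat, rec_fused)
```

In the actual MDC algorithm this combined loss would be minimized by gradient descent over all three networks jointly, and clustering would then be performed on the fused latent representation `z_fused`.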
