Multiscale Context-Aware Ensemble Deep KELM for Efficient Hyperspectral Image Classification

22 Sep 2020 · Bobo Xi, Jiaojiao Li, Yunsong Li, Rui Song, Weiwei Sun, Qian Du

Recently, multiscale spatial features have been widely utilized to improve hyperspectral image (HSI) classification performance. However, using a fixed-size neighborhood to capture contextual information can lead to misclassifications, especially for boundary pixels. Additionally, deep neural networks (DNNs) have proven effective at extracting representative features for classification tasks. Nevertheless, under the condition of high dimensionality and small sample sizes, DNNs tend to overfit and are generally time-consuming owing to their deep-level feature learning process. To alleviate these issues, we propose a multiscale context-aware ensemble deep kernel extreme learning machine (MSC-EDKELM) for efficient HSI classification. First, the scene of the HSI data set is over-segmented at multiple scales using an adaptive superpixel segmentation technique. Second, the superpixel pattern (SP) and attentional neighboring superpixel pattern (ANSP) are generated from the superpixel maps, which automatically capture local and global contextual information, respectively. Afterward, an ensemble deep kernel extreme learning machine (EDKELM) is presented to investigate the deep-level characteristics of the SP and ANSP. Finally, the category of each pixel is accurately determined by decision fusion and a weighted output-layer fusion strategy. Experimental results on four real-world HSI data sets demonstrate that the proposed frameworks outperform several classic and state-of-the-art methods with high computational efficiency and can be employed in real-time applications.
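The classifier at the core of the framework is the kernel extreme learning machine (KELM), whose training reduces to solving a single regularized linear system rather than iterative backpropagation, which is where the efficiency claim comes from. Below is a minimal sketch of a plain single-layer KELM classifier with an RBF kernel, assuming NumPy; it illustrates only the standard KELM formulation (output weights beta = (I/C + Omega)^{-1} T), not the authors' ensemble deep variant, the superpixel patterns, or the fusion strategies, and the class name and parameters are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel between rows of A and rows of B.
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

class KELM:
    """Kernel extreme learning machine with a closed-form solution (sketch)."""
    def __init__(self, C=100.0, gamma=1.0):
        self.C = C          # regularization coefficient
        self.gamma = gamma  # RBF kernel width

    def fit(self, X, y):
        # X: (n_samples, n_bands) spectra or spatial-spectral features,
        # y: integer class labels.
        self.X_train = X
        self.classes = np.unique(y)
        # One-hot encode the labels as the target matrix T.
        T = (y[:, None] == self.classes[None, :]).astype(float)
        omega = rbf_kernel(X, X, self.gamma)              # N x N kernel matrix
        n = X.shape[0]
        # Closed-form output weights: beta = (I/C + Omega)^{-1} T
        self.beta = np.linalg.solve(np.eye(n) / self.C + omega, T)
        return self

    def predict(self, X):
        k = rbf_kernel(X, self.X_train, self.gamma)       # M x N
        scores = k @ self.beta                            # M x n_classes
        return self.classes[np.argmax(scores, axis=1)]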
