Coarse-to-Fine Sparse Transformer for Hyperspectral Image Reconstruction

9 Mar 2022  ·  Yuanhao Cai, Jing Lin, Xiaowan Hu, Haoqian Wang, Xin Yuan, Yulun Zhang, Radu Timofte, Luc van Gool

Many algorithms have been developed to solve the inverse problem of coded aperture snapshot spectral imaging (CASSI), i.e., recovering the 3D hyperspectral image (HSI) from a 2D compressive measurement. In recent years, learning-based methods have demonstrated promising performance and become the mainstream research direction. However, existing CNN-based methods show limitations in capturing long-range dependencies and non-local self-similarity. Previous Transformer-based methods densely sample tokens, some of which are uninformative, and compute multi-head self-attention (MSA) between tokens that are unrelated in content. This does not fit the spatially sparse nature of HSI signals and limits model scalability. In this paper, we propose a novel Transformer-based method, coarse-to-fine sparse Transformer (CST), the first to embed HSI sparsity into deep learning for HSI reconstruction. In particular, CST uses our proposed spectra-aware screening mechanism (SASM) for coarse patch selection. The selected patches are then fed into our customized spectra-aggregation hashing multi-head self-attention (SAH-MSA) for fine-grained pixel clustering and self-similarity capturing. Comprehensive experiments show that our CST significantly outperforms state-of-the-art methods while requiring lower computational cost. The code and models will be released at https://github.com/caiyuanhao1998/MST
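
For readers unfamiliar with CASSI, the following minimal sketch illustrates a simplified single-disperser forward model: the 3D HSI cube is modulated by a coded aperture, each band is spectrally sheared, and the result is summed into one 2D snapshot. The function name, the two-pixel shift step, and the toy sizes are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def cassi_forward(hsi, mask, step=2):
    """Simplified single-disperser CASSI forward model (sketch, assumed setup).

    hsi  : (H, W, C) hyperspectral cube
    mask : (H, W) binary coded aperture
    step : spectral shift in pixels per band (assumed value)

    Returns a 2D snapshot measurement of shape (H, W + step * (C - 1)).
    """
    H, W, C = hsi.shape
    y = np.zeros((H, W + step * (C - 1)), dtype=hsi.dtype)
    for c in range(C):
        coded = hsi[:, :, c] * mask              # coded-aperture modulation
        y[:, c * step : c * step + W] += coded   # disperser shifts each band, sensor sums them
    return y

# Toy example: 256x256 cube with 28 bands and a random binary mask.
hsi = np.random.rand(256, 256, 28).astype(np.float32)
mask = (np.random.rand(256, 256) > 0.5).astype(np.float32)
print(cassi_forward(hsi, mask).shape)  # (256, 310)
```

The second sketch conveys the general idea behind the coarse-to-fine sparse attention described above: tokens from the patches retained by a coarse screening stage are grouped by a content-based hashing function, and self-attention is computed only within each bucket, so the cost scales with bucket size rather than with the square of the token count. This is a generic illustration under our own assumptions (random-projection hashing, shared Q/K/V), not the paper's exact SASM or SAH-MSA.

```python
import numpy as np

def hash_bucket_attention(x, num_buckets=4, seed=0):
    """Generic hash-bucketed self-attention sketch (not the paper's SAH-MSA).

    x : (N, D) token features from patches kept by a coarse screening step.
    Tokens are assigned to buckets by random projection; attention is computed
    only among tokens that share a bucket.
    """
    rng = np.random.default_rng(seed)
    N, D = x.shape
    proj = rng.standard_normal((D, num_buckets)).astype(x.dtype)
    buckets = np.argmax(x @ proj, axis=1)        # content-based bucket assignment
    out = np.zeros_like(x)
    for b in range(num_buckets):
        idx = np.where(buckets == b)[0]
        if idx.size == 0:
            continue
        q = k = v = x[idx]                       # shared Q/K/V projection for brevity
        attn = q @ k.T / np.sqrt(D)
        attn = np.exp(attn - attn.max(axis=1, keepdims=True))
        attn /= attn.sum(axis=1, keepdims=True)  # row-wise softmax within the bucket
        out[idx] = attn @ v
    return out

tokens = np.random.rand(128, 32).astype(np.float32)
print(hash_bucket_attention(tokens).shape)  # (128, 32)
```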

Task                    | Dataset  | Model | Metric           | Value | Global Rank
Spectral Reconstruction | CAVE     | CST-L | PSNR             | 36.12 | #2
Spectral Reconstruction | CAVE     | CST-L | SSIM             | 0.957 | #2
Spectral Reconstruction | KAIST    | CST-L | PSNR             | 36.12 | #2
Spectral Reconstruction | KAIST    | CST-L | SSIM             | 0.957 | #2
Spectral Reconstruction | Real HSI | CST-L | User Study Score | 14    | #2