Complex-Valued End-to-end Deep Network with Coherency Preservation for Complex-Valued SAR Data Reconstruction and Classification

Deep learning models have achieved remarkable success in many fields and attracted considerable interest. Several researchers have attempted to apply deep learning to synthetic aperture radar (SAR) data processing, but it has not produced the same breakthroughs as in other fields, such as optical remote sensing. SAR data are complex-valued by nature, and processing them with real-valued (RV) networks neglects the phase component, which conveys important and distinctive information. In this study, a complex-valued (CV) end-to-end deep network is developed for the reconstruction and classification of CV-SAR data. Azimuth subaperture decomposition is utilized to incorporate physics-aware attributes of CV-SAR data into the deep model. Moreover, the correlation coefficient amplitude (coherence) of CV-SAR images depends on the SAR system characteristics and the physical properties of the target; this coherence should be considered and preserved throughout the CV-SAR processing chain. The coherence preservation of CV deep networks for CV-SAR images, which is largely neglected in the literature, is evaluated in this study. Furthermore, a large-scale annotated CV-SAR dataset for evaluating CV deep networks has been lacking. A semantically annotated CV-SAR dataset built from Sentinel-1 single look complex stripmap mode data, the S1SLC_CVDL (complex-valued deep learning) dataset, is developed and introduced in this study. The experimental analysis demonstrates that the developed CV deep network outperforms the equivalent RV model and more complicated RV architectures for CV-SAR data classification and reconstruction, while also preserving coherence and exhibiting physics-aware capability.
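The coherence the abstract refers to is the magnitude of the complex correlation coefficient between two co-registered single look complex (SLC) images. A minimal NumPy sketch of how it can be estimated is shown below; the function name and the whole-array estimation window are illustrative assumptions (in practice a local sliding window is used), not the paper's implementation:

```python
import numpy as np

def coherence(img1: np.ndarray, img2: np.ndarray, eps: float = 1e-12) -> float:
    """Magnitude of the complex correlation coefficient between two
    co-registered complex-valued (SLC) SAR images.

    The expectation is estimated over the whole array here for
    simplicity; a local sliding window is the usual choice.
    (Illustrative sketch, not the paper's implementation.)
    """
    num = np.abs(np.sum(img1 * np.conj(img2)))
    den = np.sqrt(np.sum(np.abs(img1) ** 2) * np.sum(np.abs(img2) ** 2))
    return float(num / (den + eps))

# Two identical SLC patches are perfectly coherent, and a constant
# phase offset does not reduce the coherence magnitude.
rng = np.random.default_rng(0)
slc = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
print(coherence(slc, slc))                     # ~1.0
print(coherence(slc, slc * np.exp(1j * 0.5)))  # ~1.0
```

A CV network that preserves coherence should leave this quantity (computed between its input and reconstructed output) close to 1, whereas an RV pipeline that discards phase cannot be assessed this way at all.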

Datasets

Introduced in the paper: S1SLC_CVDL
