Deep multi-modal aggregation network for MR image reconstruction with auxiliary modality

15 Oct 2021  ·  Chun-Mei Feng, Huazhu Fu, Tianfei Zhou, Yong Xu, Ling Shao, David Zhang

Magnetic resonance (MR) imaging produces detailed images of organs and tissues with high soft-tissue contrast, but it suffers from long acquisition times, which makes the image quality vulnerable to motion artifacts. Recently, many approaches have been developed to reconstruct fully sampled images from partially observed measurements in order to accelerate MR imaging. However, most of these approaches focus on reconstructing a single modality, neglecting the correlations between different modalities. Here we propose a Multi-modal Aggregation network for mR Image recOnstruction with auxiliary modality (MARIO), which discovers complementary representations from a fully sampled auxiliary modality and uses them to hierarchically guide the reconstruction of a given target modality. This enables our method to selectively aggregate multi-modal representations for better reconstruction, yielding comprehensive, multi-scale, multi-modal feature fusion. Extensive experiments on the IXI and fastMRI datasets demonstrate the superiority of the proposed approach over state-of-the-art MR image reconstruction methods in removing artifacts.
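The abstract describes a two-branch design in which features from a fully sampled auxiliary modality guide the target-modality reconstruction at multiple scales. The sketch below illustrates that idea in PyTorch; it is not the authors' implementation, and the module names, channel sizes, and attention-based gating used for "selective aggregation" are assumptions for illustration only.

```python
# Minimal sketch (not the paper's code) of hierarchically guided multi-modal
# fusion: a fully sampled auxiliary modality (e.g., T1) provides multi-scale
# features that are selectively aggregated into the undersampled target
# modality (e.g., T2). All names and hyperparameters below are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, used at every scale of both branches."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class SelectiveFusion(nn.Module):
    """Gate auxiliary features with learned channel attention, then merge them
    into the target-modality features at one scale."""

    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 1), nn.Sigmoid(),
        )
        self.merge = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, target_feat, aux_feat):
        attn = self.gate(torch.cat([target_feat, aux_feat], dim=1))
        return self.merge(torch.cat([target_feat, attn * aux_feat], dim=1))


class MultiModalReconNet(nn.Module):
    """Two-branch encoder with scale-wise selective fusion and a residual head.
    Inputs: zero-filled target image and fully sampled auxiliary image."""

    def __init__(self, base_ch=32, num_scales=3):
        super().__init__()
        chs = [base_ch * 2 ** i for i in range(num_scales)]
        self.target_enc, self.aux_enc, self.fusions = (
            nn.ModuleList(), nn.ModuleList(), nn.ModuleList())
        in_ch = 1
        for c in chs:
            self.target_enc.append(conv_block(in_ch, c))
            self.aux_enc.append(conv_block(in_ch, c))
            self.fusions.append(SelectiveFusion(c))
            in_ch = c
        self.down = nn.MaxPool2d(2)
        self.head = nn.Conv2d(chs[-1], 1, 1)

    def forward(self, target_img, aux_img):
        t, a = target_img, aux_img
        for i, (te, ae, fuse) in enumerate(
                zip(self.target_enc, self.aux_enc, self.fusions)):
            t, a = te(t), ae(a)
            t = fuse(t, a)                       # hierarchical guidance at this scale
            if i < len(self.target_enc) - 1:
                t, a = self.down(t), self.down(a)
        residual = F.interpolate(self.head(t), size=target_img.shape[-2:],
                                 mode="bilinear", align_corners=False)
        return target_img + residual             # residual reconstruction


if __name__ == "__main__":
    net = MultiModalReconNet()
    t2_zero_filled = torch.randn(1, 1, 128, 128)    # undersampled target modality
    t1_fully_sampled = torch.randn(1, 1, 128, 128)  # auxiliary modality
    print(net(t2_zero_filled, t1_fully_sampled).shape)  # torch.Size([1, 1, 128, 128])
```

In this sketch the channel-attention gate stands in for the paper's "selective aggregation"; the actual MARIO fusion mechanism may differ.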
