A Jointed Feature Fusion Framework for Photoacoustic Reconstruction

4 Dec 2020  ·  Hengrong Lan, Changchun Yang, Fei Gao

Photoacoustic (PA) computed tomography (PACT) reconstructs the initial pressure distribution from raw PA signals. Standard medical image reconstruction can produce artifacts due to interference or an ill-posed setup. Recently, deep learning has been used to reconstruct PA images under ill-posed conditions. Most existing works remove artifacts in the image domain and compensate for the limited view through the dataset. In this paper, we propose a jointed feature fusion framework (JEFF-Net) based on deep learning to reconstruct PA images from limited-view data. Cross-domain features from the limited-view position-wise data and the reconstructed image are fused under a backtracked supervision. Our method achieves superior performance, with artifacts drastically reduced in the output compared to the ground truth (the full-view reconstructed result). In this work, quarter-view position-wise data (32 channels) are fed into the model, which outputs the remaining three-quarters-view data (96 channels). Moreover, two novel losses are designed to restrain artifacts by sufficiently manipulating the superposed data. Numerical and in-vivo results demonstrate the superior performance of our method in reconstructing full-view images without artifacts. Finally, quantitative evaluations show that the proposed method outperforms the ground truth on some metrics.
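To make the signal-domain completion described above concrete, the sketch below shows the data flow of feeding quarter-view data (32 channels) to a network that predicts the remaining three-quarters-view data (96 channels), which are then combined into a full-view (128-channel) sinogram. This is a minimal illustrative example only: the generic 1D convolutional encoder-decoder, the layer sizes, the number of time samples, and the simple concatenation of channels are all assumptions, not the authors' JEFF-Net architecture or channel ordering.

```python
# Minimal sketch of limited-view channel completion (illustrative, not JEFF-Net).
import torch
import torch.nn as nn

class ChannelCompletionNet(nn.Module):
    """Predicts the missing 96 sensor channels from the measured 32 channels."""
    def __init__(self, in_ch=32, out_ch=96, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, hidden, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv1d(hidden, hidden, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv1d(hidden, out_ch, kernel_size=7, padding=3),
        )

    def forward(self, x):           # x: (batch, 32, time_samples)
        return self.net(x)          # -> (batch, 96, time_samples)

# Usage: combine measured and predicted channels into a full-view sinogram.
model = ChannelCompletionNet()
measured = torch.randn(1, 32, 1024)               # quarter-view raw PA signals (assumed 1024 time samples)
predicted = model(measured)                        # estimated remaining 96 channels
full_view = torch.cat([measured, predicted], dim=1)  # (1, 128, 1024); real channel ordering follows sensor geometry
```

In the paper, this completed signal-domain data is further fused with image-domain features and supervised by the two proposed losses; the sketch covers only the channel-completion step.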
