Context encoding enables machine learning-based quantitative photoacoustics

12 Jun 2017 · Thomas Kirchner, Janek Gröhl, Lena Maier-Hein

Real-time monitoring of functional tissue parameters, such as local blood oxygenation, based on optical imaging could provide groundbreaking advances in the diagnosis and interventional therapy of various diseases. While photoacoustic (PA) imaging is a novel modality with great potential to measure optical absorption deep inside tissue, quantification of the measurements remains a major challenge. In this paper, we introduce the first machine learning-based approach to quantitative PA imaging (qPAI), which relies on learning the fluence in a voxel to deduce the corresponding optical absorption. The method encodes relevant information of the measured signal and the characteristics of the imaging system in voxel-based feature vectors, which allow the generation of thousands of training samples from a single simulated PA image. Comprehensive in silico experiments suggest that context encoding qPAI (CE-qPAI) enables highly accurate and robust quantification of the local fluence, and thereby the optical absorption, from PA images.

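The abstract describes encoding the measured signal and the characteristics of the imaging system into voxel-based feature vectors, learning the local fluence from those features, and then deducing the optical absorption from the PA signal once the fluence is known. The sketch below illustrates that pipeline on toy data only; the feature construction, the random forest regressor, and the simplified forward model (Grüneisen parameter folded into the signal) are assumptions made for this example, not the authors' implementation.

```python
"""Illustrative sketch of the context-encoding idea: learn per-voxel fluence
from voxel-based feature vectors, then recover absorption from the PA signal.
This is NOT the paper's code; feature design and regressor are assumptions."""
import numpy as np
from sklearn.ensemble import RandomForestRegressor

PATCH = 3  # neighbourhood size for the (hypothetical) context features


def encode_voxels(pa_image):
    """Turn every voxel of a 2-D PA image into a feature vector.

    Illustrative features: the voxel signal, its depth (a proxy for the
    imaging geometry), and the sorted signals of a small surrounding patch,
    which summarise the local context of the voxel.
    """
    half = PATCH // 2
    padded = np.pad(pa_image, half, mode="edge")
    feats = []
    for z in range(pa_image.shape[0]):
        for x in range(pa_image.shape[1]):
            patch = padded[z:z + PATCH, x:x + PATCH]
            feats.append(np.concatenate(([pa_image[z, x], z],
                                         np.sort(patch.ravel()))))
    return np.asarray(feats)


# --- toy in-silico data (stand-in for a Monte Carlo fluence simulation) ---
rng = np.random.default_rng(0)
mu_a = rng.uniform(0.01, 1.0, size=(32, 32))                # optical absorption
fluence = np.exp(-0.1 * np.arange(32))[:, None] * np.ones((32, 32))  # decays with depth
p0 = mu_a * fluence                                         # PA signal (Grüneisen folded in)

X = encode_voxels(p0)   # one training sample per voxel of the simulated image
y = fluence.ravel()     # regression target: the local fluence

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Quantification: estimate the fluence per voxel, then divide it out.
fluence_hat = model.predict(encode_voxels(p0)).reshape(p0.shape)
mu_a_hat = p0 / np.maximum(fluence_hat, 1e-6)
print("median abs. error in mu_a:", np.median(np.abs(mu_a_hat - mu_a)))
```

The key property this sketch mirrors is that every voxel of a single simulated image contributes one labelled training sample, so a handful of simulations already yields thousands of examples for the fluence regressor.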