CSTNet: A Dual-Branch Convolutional Network for Imaging of Reactive Flows using Chemical Species Tomography

8 Oct 2020 · Yunfan Jiang, Jingjing Si, Rui Zhang, Godwin Enemali, Bin Zhou, Hugh McCann, Chang Liu

Chemical Species Tomography (CST) has been widely used for in situ imaging of critical parameters, e.g. species concentration and temperature, in reactive flows. However, even with state-of-the-art computational algorithms, the method is limited by the inherently ill-posed and rank-deficient tomographic data inversion and by its high computational cost. These issues hinder its application to real-time flow diagnosis. To address them, we present a novel CST-based convolutional neural network (CSTNet) for high-fidelity, rapid, and simultaneous imaging of species concentration and temperature. CSTNet introduces a shared feature extractor that incorporates the CST measurements and sensor layout into the learning network. In addition, a dual-branch architecture is proposed for image reconstruction, with crosstalk decoders that automatically learn the naturally correlated distributions of species concentration and temperature. The proposed CSTNet is validated both with simulated datasets and with data measured from real flames in experiments using an industry-oriented sensor. It outperforms previous approaches in robustness to measurement noise and achieves millisecond-level computing time. To the best of our knowledge, this is the first time that a deep learning-based algorithm for CST has been experimentally validated for simultaneous imaging of multiple critical parameters in reactive flows using a low-complexity optical sensor with a severely limited number of laser beams.
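The dual-branch idea described in the abstract can be sketched as follows: a shared feature extractor maps the 1-D beam measurements to a common representation, and two decoder branches (species concentration and temperature) each also receive the other branch's intermediate features, so the correlated distributions can inform one another. This is a minimal, hypothetical NumPy sketch of that data flow only; all layer widths, the 32-beam sensor size, the 32×32 image grid, and the random untrained weights are assumptions for illustration, not the authors' CSTNet implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_encoder(measurements):
    # Stand-in for the shared feature extractor: map beam
    # measurements to a common feature vector (random weights,
    # untrained -- illustrative only).
    W = rng.standard_normal((measurements.size, 64))
    return np.maximum(measurements @ W, 0.0)  # ReLU features

def branch(features, width=32):
    # Branch-specific intermediate features derived from the
    # shared representation.
    W = rng.standard_normal((features.size, width))
    return np.maximum(features @ W, 0.0)

def decoder(own, crosstalk, side=32):
    # Crosstalk decoder: each branch decodes its own features
    # concatenated with the *other* branch's features, then
    # reshapes to an image on an assumed side x side grid.
    x = np.concatenate([own, crosstalk])
    W = rng.standard_normal((x.size, side * side))
    return np.maximum(x @ W, 0.0).reshape(side, side)

beams = rng.standard_normal(32)        # e.g. a limited set of 32 laser-beam measurements
feat = shared_encoder(beams)

conc_mid = branch(feat)                # concentration-branch features
temp_mid = branch(feat)                # temperature-branch features

# Crosstalk: each decoder sees both its own and the other branch's features.
conc_img = decoder(conc_mid, crosstalk=temp_mid)  # species-concentration image
temp_img = decoder(temp_mid, crosstalk=conc_mid)  # temperature image

print(conc_img.shape, temp_img.shape)
```

In a trained network the random matrices above would be learned convolutional layers; the sketch only shows how the shared-encoder and crosstalk connections wire the two outputs together.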
