Life detection strategy based on infrared vision and ultra-wideband radar data fusion

16 May 2019 · Yin Li, Zhou Y. M.

Life detection methods based on a single type of information source cannot meet the requirements of post-earthquake rescue, owing to their limitations across different scenes and their poor robustness. This paper proposes a deep neural network for multi-sensor decision-level fusion that combines a Convolutional Neural Network and a Long Short-Term Memory network (CNN+LSTM). First, we compute the life detection probability of each sensor with sensor-specific methods in the same scene simultaneously, and gather these values into samples that serve as inputs to the deep neural network. Then a Convolutional Neural Network (CNN) extracts spatial distribution features from the inputs, where each sensor's input is a two-channel combination of its probability values and its smoothed probability values. Next, the temporal relationships in the CNN outputs are analyzed with Long Short-Term Memory (LSTM) layers, and the outputs of the three LSTM branches are concatenated. Finally, two further LSTM layers, distinct from the previous ones, integrate the features of the three branches, and a fully connected network trained with a Binary Cross-Entropy (BCE) loss outputs the binary classification result. The proposed algorithm therefore yields accurate life detection classifications.
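To make the described pipeline concrete, here is a minimal PyTorch sketch of a three-branch CNN+LSTM decision-level fusion network of this shape. It is an illustration under assumptions, not the authors' implementation: the window length, layer widths, kernel sizes, and the `SensorBranch`/`FusionNet` names are all hypothetical choices, since the paper's exact configuration is not given in the abstract.

```python
# Hedged sketch of the CNN+LSTM fusion described above (assumes PyTorch).
# Each branch consumes a 2-channel window (raw + smoothed probabilities).
import torch
import torch.nn as nn

class SensorBranch(nn.Module):
    """One sensor branch: Conv1d layers over a 2-channel probability
    window, followed by an LSTM over the resulting feature sequence."""
    def __init__(self, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=3, padding=1),  # 2 input channels
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)

    def forward(self, x):              # x: (batch, 2, T)
        feats = self.cnn(x)            # (batch, 32, T)
        feats = feats.transpose(1, 2)  # (batch, T, 32) for the LSTM
        out, _ = self.lstm(feats)      # (batch, T, hidden)
        return out

class FusionNet(nn.Module):
    """Concatenate the three branch outputs, integrate them with two
    further LSTM layers, and classify with a fully connected head."""
    def __init__(self, hidden=32):
        super().__init__()
        self.branches = nn.ModuleList(SensorBranch(hidden) for _ in range(3))
        self.fusion = nn.LSTM(input_size=3 * hidden, hidden_size=hidden,
                              num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # single logit for the BCE loss

    def forward(self, xs):             # xs: list of 3 tensors, each (batch, 2, T)
        merged = torch.cat([b(x) for b, x in zip(self.branches, xs)], dim=-1)
        out, _ = self.fusion(merged)   # (batch, T, hidden)
        return self.head(out[:, -1])   # classify from the last time step

# Toy usage: batch of 4 samples, windows of 20 probability values per sensor.
net = FusionNet()
xs = [torch.rand(4, 2, 20) for _ in range(3)]
logits = net(xs)                                        # (4, 1)
loss = nn.BCEWithLogitsLoss()(logits, torch.ones(4, 1))
```

`BCEWithLogitsLoss` is used here so the head can output raw logits; an equivalent design would apply a sigmoid and plain `BCELoss`.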
