Performance Analysis on Machine Learning-Based Channel Estimation

10 Nov 2019 · Kai Mei, Jun Liu, Xiaochen Zhang, Nandana Rajatheva, Jibo Wei

Recently, machine learning-based channel estimation has attracted much attention. Its performance has been validated by simulation experiments, but little attention has been paid to theoretical performance analysis. In this paper, we investigate the mean square error (MSE) performance of machine learning-based estimation. Hypothesis testing is employed to analyze its MSE upper bound. Furthermore, we build a statistical model for hypothesis testing, which holds when a linear learning module with a low input dimension is used in machine learning-based channel estimation, and derive a clear analytical relation between the size of the training data and the achievable performance. We then simulate machine learning-based channel estimation in orthogonal frequency division multiplexing (OFDM) systems to verify our analysis results. Finally, we discuss design considerations for the situation where only limited training data is available; there, our analysis results can be applied to assess the performance and support the design of machine learning-based channel estimation.
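To make the setting concrete, below is a minimal illustrative sketch (not the paper's implementation) of the kind of experiment the abstract describes: a linear learning module with a low input dimension is trained on pairs of noisy least-squares (LS) channel estimates and true channels, and its test MSE is measured as a function of the training-set size. The subcarrier count, SNR, and training sizes are arbitrary assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_sc = 8           # assumed low input dimension (subcarriers fed to the linear module)
snr_db = 10.0      # assumed pilot SNR
noise_var = 10 ** (-snr_db / 10)

def gen_batch(n):
    """Generate true Rayleigh channels and their noisy LS estimates.

    With unit-power pilot symbols, the LS estimate is simply the
    true channel plus complex Gaussian noise.
    """
    h = (rng.standard_normal((n, n_sc)) + 1j * rng.standard_normal((n, n_sc))) / np.sqrt(2)
    noise = np.sqrt(noise_var / 2) * (
        rng.standard_normal((n, n_sc)) + 1j * rng.standard_normal((n, n_sc))
    )
    return h, h + noise  # (true channel, LS estimate)

for n_train in (10, 100, 10_000):
    h_tr, x_tr = gen_batch(n_train)
    # "Train" the linear module: solve min_W ||x_tr @ W - h_tr||^2 in closed form.
    W, *_ = np.linalg.lstsq(x_tr, h_tr, rcond=None)
    # Evaluate MSE on a large held-out test set.
    h_te, x_te = gen_batch(50_000)
    mse = np.mean(np.abs(x_te @ W - h_te) ** 2)
    print(f"n_train={n_train:6d}  test MSE={mse:.4f}")
```

With ample training data, the learned linear estimator approaches the linear MMSE solution for this toy model (MSE near noise_var / (1 + noise_var)); with few training samples, the estimated weights are noisy and the test MSE degrades, which is the training-size/performance trade-off the paper analyzes.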
