Path Design for Cellular-Connected UAV with Reinforcement Learning

9 May 2019  ·  Yong Zeng, Xiaoli Xu

This paper studies the path design problem for a cellular-connected unmanned aerial vehicle (UAV), which aims to minimize the mission completion time while maintaining good connectivity with the cellular network. We first argue that the conventional path design approach of formulating and solving optimization problems faces several practical challenges, and then propose a new reinforcement learning-based UAV path design algorithm that applies the temporal-difference method to directly learn the state-value function of the corresponding Markov decision process. The proposed algorithm is further extended with linear function approximation and tile coding to handle large state spaces. The proposed algorithms require only raw measured or simulation-generated signal strength as input and are suitable for both online and offline implementation. Numerical results show that the proposed path designs can successfully avoid the coverage holes of cellular networks even in complex urban environments.
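To make the abstract's pipeline concrete, below is a minimal sketch of TD(0) state-value learning with tile-coding linear function approximation on a 2-D grid abstraction of the flight area. The grid size, synthetic coverage-hole map, reward weights, and epsilon-greedy one-step-lookahead policy are illustrative assumptions for this sketch, not the paper's exact formulation or parameters.

```python
# Illustrative sketch: TD(0) learning of a state-value function with tile coding,
# on a toy grid world with synthetic cellular coverage holes.
import numpy as np

rng = np.random.default_rng(0)

GRID = 20                                   # flight area discretized into GRID x GRID cells
DEST = np.array([GRID - 1, GRID - 1])       # mission destination (assumed)
ACTIONS = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])  # down, up, right, left moves

# Synthetic "coverage hole" map: True = weak/no cellular signal (illustrative only;
# the paper uses measured or simulation-generated signal strength instead).
holes = np.zeros((GRID, GRID), dtype=bool)
holes[6:10, 3:14] = True
holes[13:17, 8:18] = True

N_TILINGS, TILES = 8, 5                     # tile-coding resolution (assumed)
weights = np.zeros(N_TILINGS * TILES * TILES)

def tile_features(pos):
    """Return the active tile index in each of the N_TILINGS offset tilings."""
    idx = []
    for t in range(N_TILINGS):
        offset = t / N_TILINGS              # each tiling is shifted by a fraction of a tile
        scaled = (pos / GRID) * TILES + offset
        i, j = np.clip(scaled.astype(int), 0, TILES - 1)
        idx.append(t * TILES * TILES + i * TILES + j)
    return np.array(idx)

def value(pos):
    """Linear function approximation: sum of weights of the active tiles."""
    return weights[tile_features(pos)].sum()

def reward(pos):
    """Per-step cost (time penalty) plus an extra penalty inside a coverage hole."""
    if np.array_equal(pos, DEST):
        return 0.0
    return -1.0 - (10.0 if holes[pos[0], pos[1]] else 0.0)

def step(pos, a):
    nxt = np.clip(pos + ACTIONS[a], 0, GRID - 1)
    return nxt, reward(nxt), np.array_equal(nxt, DEST)

alpha, gamma, eps = 0.1 / N_TILINGS, 0.99, 0.1
for episode in range(2000):
    pos = np.array([0, 0])                  # mission start (assumed)
    for _ in range(400):
        if rng.random() < eps:              # exploration
            a = rng.integers(len(ACTIONS))
        else:                               # greedy one-step lookahead on learned values
            def lookahead(k):
                nxt = np.clip(pos + ACTIONS[k], 0, GRID - 1)
                return reward(nxt) + gamma * value(nxt)
            a = max(range(len(ACTIONS)), key=lookahead)
        nxt, r, done = step(pos, a)
        target = r + (0.0 if done else gamma * value(nxt))
        td_error = target - value(pos)
        weights[tile_features(pos)] += alpha * td_error  # semi-gradient TD(0) update
        pos = nxt
        if done:
            break
```

The per-step cost of -1 stands in for mission completion time and the extra penalty for flying through a coverage hole stands in for connectivity loss; after training, following the greedy one-step lookahead on the learned value function yields paths that trade off detour length against time spent in poorly covered cells.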
