Deep-Reinforcement-Learning-Based Scheduling with Contiguous Resource Allocation for Next-Generation Cellular Systems

11 Oct 2020  ·  Shu Sun, Xiaofeng Li ·

Scheduling plays a pivotal role in multi-user wireless communications, since the quality of service of each user largely depends on the allocated radio resources. In this paper, we propose a novel scheduling algorithm with contiguous frequency-domain resource allocation (FDRA) based on deep reinforcement learning (DRL) that jointly selects users and allocates resource blocks (RBs). The scheduling problem is modeled as a Markov decision process, and a DRL agent determines, at each RB allocation step, which user to schedule and how many consecutive RBs to allocate to that user. The state space, action space, and reward function are carefully designed to train the DRL network. More specifically, the originally quasi-continuous action space, which is inherent to contiguous FDRA, is refined into a finite, discrete action space to strike a trade-off between inference latency and system performance. Simulation results show that the proposed DRL-based scheduling algorithm outperforms representative baseline schemes while incurring lower online computational complexity.
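The core idea of the discretized action space can be illustrated with a short sketch. The following Python snippet is a hypothetical illustration, not the authors' implementation: it assumes each action is a (user, chunk size) pair, with the quasi-continuous "how many contiguous RBs" choice quantized to a small set of sizes, and allocation proceeding left to right across the band.

```python
# Hypothetical sketch of a finite, discrete action space for contiguous
# frequency-domain resource allocation (FDRA). Names and chunk sizes are
# illustrative assumptions, not from the paper.
from itertools import product

def build_action_space(num_users, rb_chunk_sizes):
    """Enumerate every (user, chunk_size) pair as a discrete action set."""
    return list(product(range(num_users), rb_chunk_sizes))

def apply_action(action, next_free_rb, total_rbs, allocation):
    """Allocate a contiguous run of RBs starting at the next free RB.

    Clips the run at the band edge and returns the new next-free-RB index.
    """
    user, chunk = action
    end = min(next_free_rb + chunk, total_rbs)
    for rb in range(next_free_rb, end):
        allocation[rb] = user
    return end

# Example: 4 users, chunk sizes quantized to {1, 2, 4, 8} RBs -> 16 actions.
actions = build_action_space(num_users=4, rb_chunk_sizes=(1, 2, 4, 8))
allocation = [None] * 16                               # 16 RBs, none assigned
cursor = apply_action((2, 4), 0, 16, allocation)       # user 2 gets RBs 0-3
cursor = apply_action((0, 8), cursor, 16, allocation)  # user 0 gets RBs 4-11
```

Quantizing the chunk sizes keeps the agent's output layer small (here 16 logits instead of one logit per possible RB count per user), which is one plausible way to realize the latency/performance trade-off the abstract describes.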
