
A Federated Reinforcement Learning Method with Quantization for Cooperative Edge Caching in Fog Radio Access Networks

In this paper, the cooperative edge caching problem is studied in fog radio access networks (F-RANs). Given the non-deterministic polynomial-hard (NP-hard) nature of the problem, a dueling deep Q-network (Dueling DQN) based caching update algorithm is proposed to make optimal caching decisions by learning the dynamic network environment. To protect user data privacy and to overcome the slow convergence of training a single deep reinforcement learning (DRL) model, we propose a federated reinforcement learning method with quantization (FRLQ) that cooperatively trains models across multiple fog access points (F-APs) in F-RANs. To address the excessive consumption of communication resources caused by model transmission, we prune and quantize the shared DRL models to reduce the number of transmitted model parameters. The communication interval is lengthened and the number of communication rounds is reduced through periodic global model aggregation. We analyze the global convergence and computational complexity of our policy. Simulation results verify that our policy outperforms benchmark schemes in reducing user request delay and improving the cache hit rate. The proposed policy is also shown to achieve faster training and higher communication efficiency with minimal loss of model accuracy.
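To make the two core ingredients of the abstract concrete, below is a minimal PyTorch sketch of (1) a dueling DQN head that splits the state-value and action-advantage streams, and (2) magnitude pruning followed by uniform quantization of the model parameters before they are shared for federated aggregation. This is not the authors' implementation; all names (`DuelingDQN`, `quantize_tensor`, the pruning ratio, the bit width, and the dimensions in the usage example) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class DuelingDQN(nn.Module):
    """Dueling architecture: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)                # state-value stream V(s)
        self.advantage = nn.Linear(hidden, num_actions)  # advantage stream A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.features(state)
        v = self.value(h)      # shape: (batch, 1)
        a = self.advantage(h)  # shape: (batch, num_actions)
        # Subtract the mean advantage so V and A are identifiable.
        return v + a - a.mean(dim=1, keepdim=True)


def quantize_tensor(w: torch.Tensor, prune_ratio: float = 0.5, bits: int = 8):
    """Prune small-magnitude weights, then uniformly quantize to `bits` bits.

    Returns integer codes plus the (scale, offset) needed to dequantize on
    the aggregation side; only these would need to be transmitted.
    """
    # Magnitude pruning: zero out the smallest prune_ratio fraction of weights.
    threshold = w.abs().flatten().quantile(prune_ratio)
    pruned = torch.where(w.abs() < threshold, torch.zeros_like(w), w)

    # Uniform affine quantization of the surviving weights.
    lo, hi = pruned.min(), pruned.max()
    scale = (hi - lo).clamp(min=1e-8) / (2**bits - 1)
    codes = ((pruned - lo) / scale).round().to(torch.int32)
    return codes, scale, lo


def dequantize_tensor(codes: torch.Tensor, scale: torch.Tensor, lo: torch.Tensor):
    """Recover approximate float weights from the transmitted codes."""
    return codes.to(torch.float32) * scale + lo


# Example: one F-AP client compresses its local model for a federated upload.
model = DuelingDQN(state_dim=16, num_actions=4)
payload = {name: quantize_tensor(p.data) for name, p in model.named_parameters()}
```

Under these assumptions, each client uploads 8-bit codes instead of 32-bit floats (roughly a 4x reduction before pruning is counted), and the server dequantizes and averages the received parameters at each periodic aggregation round.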
