Resource Allocation in Mobility-Aware Federated Learning Networks: A Deep Reinforcement Learning Approach

Federated learning allows mobile devices, i.e., workers, to use their local data to collaboratively train a global model required by the model owner, and thus addresses the privacy issues of traditional machine learning. However, federated learning faces the energy constraints of the workers and a high network resource cost, since many global model transmissions may be required to reach the target accuracy. To address the energy constraint, a power beacon can be used to recharge the workers, but the model owner may need to pay the power beacon an energy cost for the recharging. To address the high network resource cost, the model owner can transmit the global model over a WiFi channel, called the default channel; however, communication interruptions may occur because the quality of the default channel is unstable. Special channels such as LTE channels can be used instead, but they incur a channel cost. The problem of the model owner is thus to decide how much energy to recharge to each worker and which channels to use for transmitting its global model to the workers, so as to maximize the number of global model transmissions while minimizing the energy and channel costs. This is challenging under the uncertainty of the channel, energy, and mobility states of the workers. In this paper, we therefore propose to employ a Deep Q-Network (DQN) that enables the model owner to find optimal decisions on the energy and the channels without any a priori network knowledge. Simulation results show that the proposed DQN consistently outperforms conventional algorithms.
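The abstract gives no implementation details, but the decision problem it describes maps naturally onto a standard DQN agent. Below is a minimal sketch, assuming a discretized joint action space over recharge amounts and channel choices; all dimensions, names (QNetwork, STATE_DIM, select_action, ...), and hyperparameters are illustrative assumptions, not the authors' actual setup. The reward would encode the stated objective, e.g., a bonus per successful global model transmission minus weighted energy and channel costs.

```python
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical problem dimensions -- the paper does not publish its exact
# state/action encoding here, so these values are illustrative only.
STATE_DIM = 6          # e.g., channel quality, worker energy levels, worker locations
N_ENERGY_LEVELS = 4    # discretized energy-recharge amounts
N_CHANNELS = 2         # default (WiFi) channel vs. special (LTE) channel
N_ACTIONS = N_ENERGY_LEVELS * N_CHANNELS  # joint (energy, channel) decision

class QNetwork(nn.Module):
    """Maps an observed network state to Q-values over joint actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

q_net = QNetwork()
target_net = QNetwork()
target_net.load_state_dict(q_net.state_dict())
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)   # experience replay buffer of (s, a, r, s', done)
gamma, epsilon = 0.99, 0.1

def decode(action):
    """Split the joint action index into (energy level, channel) decisions."""
    return divmod(action, N_CHANNELS)

def select_action(state):
    """Epsilon-greedy choice over joint (energy amount, channel) actions."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return q_net(torch.as_tensor(state, dtype=torch.float32)).argmax().item()

def train_step(batch_size=32):
    """One gradient step on the standard DQN temporal-difference target.

    Reward is assumed to reflect the abstract's objective: reward successful
    global model transmissions, penalize energy and channel costs.
    """
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2, done = (torch.as_tensor(x, dtype=torch.float32) for x in zip(*batch))
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(s2).max(1).values          # bootstrap from target network
    target = r + gamma * (1 - done) * q_next
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this framing, the model owner observes the (uncertain) channel, energy, and mobility states, picks a joint recharge/channel action each round, and periodically copies q_net's weights into target_net; no a priori network model is required, matching the paper's motivation for using DQN.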