Large-Scale Graph Reinforcement Learning in Wireless Control Systems

24 Jan 2022 · Vinicius Lima, Mark Eisen, Konstantinos Gatsis, Alejandro Ribeiro

Modern control systems routinely employ wireless networks to exchange information between spatially distributed plants, actuators, and sensors. Because wireless networks are defined by random, rapidly changing transmission conditions that challenge assumptions commonly held in the design of control systems, proper allocation of communication resources is essential for reliable operation. Designing such resource allocation policies is itself challenging, which has motivated recent work to exploit deep learning and deep reinforcement learning techniques to design resource allocation and scheduling policies for wireless control systems (WCSs). Because the number of learnable parameters in a conventional neural network grows with the size of the input signal, however, deep reinforcement learning may fail to scale, limiting the immediate generalization of such scheduling and resource allocation policies to large-scale systems. The interference and fading patterns among plants and controllers in the network instead induce a time-varying graph that can be used to construct policy representations based on graph neural networks (GNNs), whose number of learnable parameters is independent of the number of plants in the network. We further establish, in the context of WCSs, that due to its inherent invariance to graph permutations the GNN is able to model scalable and transferable resource allocation policies, which are subsequently trained with primal-dual reinforcement learning. Numerical experiments show that the proposed graph reinforcement learning approach yields policies that not only outperform baseline solutions and policies based on deep reinforcement learning in large-scale systems, but can also be transferred across networks of varying size.
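
To make the parameterization concrete, below is a minimal sketch (not the authors' implementation) of a graph-filter-based GNN policy in Python/NumPy. The fading and interference pattern enters as a graph shift operator `S`, and the filter taps, whose shapes do not depend on the number of plants, are the only learnable parameters. The class name `GraphFilterPolicy`, the layer widths, the filter order, and the sigmoid output bounding the allocation are illustrative assumptions, and the primal-dual training loop is omitted.

```python
# Minimal sketch (not the paper's code): a graph-filter-based GNN policy whose
# parameter count is independent of the number of plants n. The fading/interference
# matrix S acts as a time-varying graph shift operator; layer widths, filter order,
# and the sigmoid output are illustrative choices.
import numpy as np

class GraphFilterPolicy:
    def __init__(self, feats=(1, 8, 8, 1), order=4, seed=0):
        rng = np.random.default_rng(seed)
        # One bank of K filter taps per layer, shape (K, F_in, F_out).
        # None of these shapes depend on the number of plants n.
        self.taps = [rng.normal(scale=0.1, size=(order, fi, fo))
                     for fi, fo in zip(feats[:-1], feats[1:])]

    def __call__(self, S, x):
        """S: (n, n) interference/fading matrix; x: (n, F0) plant states/features.
        Returns an (n, 1) vector of transmit powers or scheduling scores."""
        z = x
        for layer, H in enumerate(self.taps):
            diffused = z                    # S^0 z
            out = diffused @ H[0]
            for k in range(1, H.shape[0]):
                diffused = S @ diffused     # propagate features one more hop: S^k z
                out = out + diffused @ H[k]
            # ReLU on hidden layers; sigmoid on the output to bound the allocation.
            z = np.maximum(out, 0.0) if layer < len(self.taps) - 1 else 1.0 / (1.0 + np.exp(-out))
        return z

# The same parameters can be evaluated on networks of different size,
# which is what enables transference across networks in the abstract's sense:
policy = GraphFilterPolicy()
for n in (10, 50):
    S = np.abs(np.random.default_rng(1).normal(size=(n, n))) / n   # toy fading graph
    x = np.random.default_rng(2).normal(size=(n, 1))               # toy plant states
    print(n, policy(S, x).shape)   # -> (10, 1) and (50, 1) with identical parameters
```

Because each layer only mixes features through powers of `S`, relabeling the plants permutes the output in the same way, which is the permutation property the abstract relies on for scalability and transfer.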
