AoI Minimization in Status Update Control with Energy Harvesting Sensors

9 Sep 2020 · Mohammad Hatami, Markus Leinonen, Marian Codreanu

Information freshness is crucial for time-critical IoT applications, e.g., monitoring and control systems. We consider an IoT status update system with multiple users, multiple energy-harvesting sensors, and a wireless edge node. The users receive time-sensitive information about physical quantities, each measured by a sensor. Users send requests to the edge node, which maintains a cache containing the most recently received measurement from each sensor. To serve a request, the edge node either commands the sensor to send a fresh status update or retrieves the aged measurement from the cache. We aim to find the best actions of the edge node to minimize the age of information (AoI) of the served measurements. We model this problem as a Markov decision process and develop reinforcement learning (RL) algorithms: a model-based value iteration method and a model-free Q-learning method. We also propose a Q-learning method for the realistic case where the edge node is informed about the sensors' battery levels only via the status updates. The case with transmission limitations is also addressed. Furthermore, properties of an optimal policy are analytically characterized. Simulation results show that an optimal policy has a threshold-based structure and that the proposed RL methods significantly reduce the average cost compared to several baselines.
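
To make the model-free approach concrete, below is a minimal tabular Q-learning sketch for a single sensor-user pair. The state is (cached AoI, sensor battery level) and the two actions are "serve from cache" and "request a fresh update". All parameters, the Bernoulli energy-harvesting model, and the transition dynamics are illustrative assumptions for this sketch, not the paper's exact formulation or implementation.

```python
import numpy as np

# Illustrative tabular Q-learning sketch (hypothetical parameters,
# not the paper's exact model or implementation).
AOI_MAX = 10          # cap on the age of information kept in the cache
B_MAX = 5             # sensor battery capacity (energy units)
P_HARVEST = 0.3       # assumed probability of harvesting one energy unit per slot
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
N_SLOTS = 200_000

rng = np.random.default_rng(0)
# State: (cached AoI, battery level); actions: 0 = serve from cache, 1 = request fresh update
Q = np.zeros((AOI_MAX + 1, B_MAX + 1, 2))

aoi, battery = AOI_MAX, B_MAX
for _ in range(N_SLOTS):
    # Epsilon-greedy action selection (minimizing cost)
    if rng.random() < EPS:
        a = int(rng.integers(2))
    else:
        a = int(np.argmin(Q[aoi, battery]))

    # A fresh update is delivered only if the sensor has energy left
    if a == 1 and battery > 0:
        served_aoi = 1              # one slot for sensing + transmission
        next_battery = battery - 1
    else:
        served_aoi = aoi            # serve the (aged) cached measurement
        next_battery = battery

    cost = served_aoi               # immediate cost = AoI of the served measurement

    # Environment transition: cached measurement ages by one slot, energy may be harvested
    next_aoi = min(served_aoi + 1, AOI_MAX)
    if rng.random() < P_HARVEST:
        next_battery = min(next_battery + 1, B_MAX)

    # Q-learning update (min over next actions since we minimize cost)
    td_target = cost + GAMMA * Q[next_aoi, next_battery].min()
    Q[aoi, battery, a] += ALPHA * (td_target - Q[aoi, battery, a])

    aoi, battery = next_aoi, next_battery

# Greedy policy: for each (AoI, battery) state, request an update (1) or serve from cache (0)
policy = Q.argmin(axis=2)
print(policy)
```

Under these assumptions, the learned policy table typically exhibits the threshold structure noted in the abstract: for a given battery level, the edge node requests a fresh update once the cached AoI exceeds some level.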
