no code implementations • 6 Jan 2022 • Filippo Vannella, Alexandre Proutiere, Yassir Jedra, Jaeseong Jeong
In this paper, we devise algorithms learning optimal tilt control policies from existing data (in the so-called passive learning setting) or from data actively generated by the algorithms (the active learning setting).
no code implementations • 27 Dec 2021 • Yifei Jin, Filippo Vannella, Maxime Bouton, Jaeseong Jeong, Ezeddin Al Hakim
GAQ relies on a graph attention mechanism to select relevant neighbor information, improve the agent's state representation, and update the tilt control policy based on a history of observations using a Deep Q-Network (DQN).
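The core idea can be sketched in a few lines: score each neighbor cell against the agent's own cell, softmax the scores, and append the attention-weighted neighbor summary to the agent's state before it is fed to the Q-network. This is a minimal illustrative sketch, not the paper's GAQ implementation; the feature dimensions, projection matrices, and random inputs are all hypothetical.

```python
import numpy as np

def attention_aggregate(agent_feat, neighbor_feats, w_query, w_key):
    """Toy graph-attention step: score neighbors against the agent cell,
    softmax the scores, and return the agent features concatenated with
    an attention-weighted summary of its neighbors."""
    query = agent_feat @ w_query                   # project agent features
    keys = neighbor_feats @ w_key                  # project neighbor features
    scores = keys @ query / np.sqrt(len(query))    # scaled dot-product scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax over neighbors
    context = weights @ neighbor_feats             # weighted neighbor summary
    return np.concatenate([agent_feat, context])   # enriched state for the DQN

rng = np.random.default_rng(0)
d = 4                                              # hypothetical feature size
state = attention_aggregate(
    rng.normal(size=d),                            # agent cell KPIs (made up)
    rng.normal(size=(3, d)),                       # 3 neighbor cells
    rng.normal(size=(d, d)),                       # toy projection weights
    rng.normal(size=(d, d)),
)
print(state.shape)                                 # (8,): agent + context
```

In a full agent, this enriched state (or a history of such states) would be the input to the DQN rather than the raw per-cell features.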
no code implementations • 2 Dec 2020 • Erik Aumayr, Saman Feghhi, Filippo Vannella, Ezeddin Al Hakim, Grigorios Iakovidis
In the context of network management operations, Remote Electrical Tilt (RET) optimisation is a safety-critical application in which exploratory modifications of antenna tilt angles of base stations can cause significant performance degradation in the network.
no code implementations • 12 Oct 2020 • Filippo Vannella, Grigorios Iakovidis, Ezeddin Al Hakim, Erik Aumayr, Saman Feghhi
In this work, we model the RET optimization problem in the Safe Reinforcement Learning (SRL) framework with the goal of learning a tilt control strategy providing performance improvement guarantees with respect to a safe baseline.
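A common way to make such a guarantee concrete is a safe-policy-improvement style acceptance test: a candidate tilt policy is deployed only if a high-confidence lower bound on its estimated return beats the safe baseline. The sketch below is a generic illustration of that idea (the Hoeffding bound, normalised returns, and thresholds are assumptions, not the paper's method):

```python
import numpy as np

def passes_safety_check(returns_new_est, baseline_return, delta=0.05):
    """Accept a candidate policy only if a (1 - delta)-confidence lower
    bound on its mean estimated return exceeds the safe baseline.
    Assumes returns are normalised to [0, 1] so a Hoeffding bound applies."""
    n = len(returns_new_est)
    mean = returns_new_est.mean()
    lower = mean - np.sqrt(np.log(1.0 / delta) / (2 * n))  # Hoeffding bound
    return lower >= baseline_return

rng = np.random.default_rng(1)
good = rng.uniform(0.7, 1.0, size=500)   # candidate clearly above baseline
bad = rng.uniform(0.0, 0.6, size=500)    # candidate below baseline
print(passes_safety_check(good, 0.5))    # True: safe to deploy
print(passes_safety_check(bad, 0.5))     # False: keep the baseline
```

The conservative direction of the test matters in a safety-critical setting like RET: rejecting a genuinely better policy costs some improvement, while accepting a worse one degrades live network performance.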
no code implementations • 21 May 2020 • Filippo Vannella, Jaeseong Jeong, Alexandre Proutiere
In this paper, we circumvent these issues by learning an improved policy in an offline manner using existing data collected on real networks.
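The offline setting means the learner only sweeps a fixed dataset of logged transitions and never interacts with the real network. A minimal tabular sketch of that idea, with a hypothetical two-state toy environment standing in for logged network data:

```python
import numpy as np

def offline_q_learning(transitions, n_states, n_actions, gamma=0.9, sweeps=200):
    """Tabular batch Q-learning: repeatedly sweep a fixed logged dataset
    of (s, a, r, s') transitions; no new data is collected."""
    q = np.zeros((n_states, n_actions))
    for _ in range(sweeps):
        for s, a, r, s2 in transitions:
            target = r + gamma * q[s2].max()       # bootstrapped TD target
            q[s, a] += 0.1 * (target - q[s, a])    # fixed learning rate
    return q

# Hypothetical logged data: action 1 in state 0 reaches rewarding state 1.
logged = [(0, 0, 0.0, 0), (0, 1, 0.0, 1), (1, 0, 1.0, 1), (1, 1, 0.0, 0)]
q = offline_q_learning(logged, n_states=2, n_actions=2)
policy = q.argmax(axis=1)   # greedy policy decoded from the learned Q-table
print(policy)               # prefers action 1 in state 0, action 0 in state 1
```

Real offline RL on network data needs function approximation and care about distribution shift between the logging policy and the learned one; this sketch only shows the batch-update structure.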