Search Results for author: Filippo Vannella

Found 5 papers, 0 papers with code

Learning Optimal Antenna Tilt Control Policies: A Contextual Linear Bandit Approach

no code implementations • 6 Jan 2022 • Filippo Vannella, Alexandre Proutiere, Yassir Jedra, Jaeseong Jeong

In this paper, we devise algorithms that learn optimal tilt control policies either from existing data (the so-called passive learning setting) or from data actively generated by the algorithms themselves (the active learning setting).

Active Learning
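
The passive/active distinction above maps naturally onto contextual bandits. As a rough illustration, here is a minimal LinUCB-style sketch in Python; the feature dimension, the three-action tilt set ({down-tilt, keep, up-tilt}), and the random context and reward are placeholder assumptions, not the algorithm or data from the paper.

```python
import numpy as np

# Minimal LinUCB-style contextual bandit for discrete tilt actions.
# Context features, action set, and reward below are illustrative
# placeholders, not the paper's actual formulation.

class LinUCB:
    def __init__(self, n_actions, dim, alpha=1.0):
        self.alpha = alpha
        # One ridge-regression model per tilt action.
        self.A = [np.eye(dim) for _ in range(n_actions)]
        self.b = [np.zeros(dim) for _ in range(n_actions)]

    def select(self, x):
        # Pick the action with the highest upper confidence bound.
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            bonus = self.alpha * np.sqrt(x @ A_inv @ x)
            scores.append(theta @ x + bonus)
        return int(np.argmax(scores))

    def update(self, action, x, reward):
        self.A[action] += np.outer(x, x)
        self.b[action] += reward * x

# Hypothetical usage: context = cell KPIs, actions = tilt adjustments.
bandit = LinUCB(n_actions=3, dim=5)             # e.g. {down-tilt, keep, up-tilt}
ctx = np.random.randn(5)                        # placeholder context vector
a = bandit.select(ctx)
bandit.update(a, ctx, reward=np.random.rand())  # placeholder reward
```

The exploration bonus alpha * sqrt(x' A^-1 x) is the piece an active-learning variant would exploit when generating new data; a passive variant would only call update() on logged interactions.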

A Graph Attention Learning Approach to Antenna Tilt Optimization

no code implementations • 27 Dec 2021 • Yifei Jin, Filippo Vannella, Maxime Bouton, Jaeseong Jeong, Ezeddin Al Hakim

GAQ relies on a graph attention mechanism to select relevant neighbor information, improve the agent's state representation, and update the tilt control policy based on a history of observations using a Deep Q-Network (DQN).

Graph Attention • Q-Learning • +1
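
As a sketch of the mechanism described above: one attention hop aggregates neighbor-cell features into the agent's state, and a Q-head then scores tilt actions. The feature size, hidden width, neighbor count, and action count are illustrative assumptions; this is not the GAQ architecture itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionQNet(nn.Module):
    """One attention hop over neighbor cells, then a Q-value head.
    Dimensions and action count are illustrative assumptions."""
    def __init__(self, feat_dim=8, hidden=32, n_actions=3):
        super().__init__()
        self.proj = nn.Linear(feat_dim, hidden)
        self.attn = nn.Linear(2 * hidden, 1)   # scores agent-neighbor pairs
        self.q_head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, agent_feat, neighbor_feats):
        # agent_feat: (feat_dim,), neighbor_feats: (n_neighbors, feat_dim)
        h_agent = self.proj(agent_feat)                       # (hidden,)
        h_nbrs = self.proj(neighbor_feats)                    # (N, hidden)
        pairs = torch.cat(
            [h_agent.expand_as(h_nbrs), h_nbrs], dim=-1)      # (N, 2*hidden)
        weights = F.softmax(self.attn(pairs).squeeze(-1), 0)  # attention over N
        context = (weights.unsqueeze(-1) * h_nbrs).sum(0)     # neighbor summary
        state = torch.cat([h_agent, context])                 # enriched state
        return self.q_head(state)                             # Q per tilt action

net = GraphAttentionQNet()
q_values = net(torch.randn(8), torch.randn(6, 8))  # 6 placeholder neighbors
action = int(q_values.argmax())                     # greedy tilt action
```

In a full DQN loop this network would be trained on replayed transitions against a target network; only the forward pass is sketched here.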

A Safe Reinforcement Learning Architecture for Antenna Tilt Optimisation

no code implementations • 2 Dec 2020 • Erik Aumayr, Saman Feghhi, Filippo Vannella, Ezeddin Al Hakim, Grigorios Iakovidis

In the context of network management operations, Remote Electrical Tilt (RET) optimisation is a safety-critical application in which exploratory modifications of antenna tilt angles of base stations can cause significant performance degradation in the network.

Management • reinforcement-learning • +2

Remote Electrical Tilt Optimization via Safe Reinforcement Learning

no code implementations • 12 Oct 2020 • Filippo Vannella, Grigorios Iakovidis, Ezeddin Al Hakim, Erik Aumayr, Saman Feghhi

In this work, we model the RET optimization problem in the Safe Reinforcement Learning (SRL) framework with the goal of learning a tilt control strategy providing performance improvement guarantees with respect to a safe baseline.

reinforcement-learning • Reinforcement Learning (RL) • +2
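
The "performance improvement guarantees with respect to a safe baseline" idea can be caricatured as: estimate the candidate policy's value from logged data and deploy it only if a conservative lower bound beats the baseline. The sketch below uses a single-step importance-sampling estimator and a crude Hoeffding-style bound; the policies, data, and bound are stand-in assumptions, not the paper's SRL formulation.

```python
import numpy as np

def is_estimates(logged, target_probs, behavior_probs):
    """Per-sample importance-sampling estimates of the target policy's
    return from logged (action, reward) data. Single-step for simplicity."""
    actions, rewards = logged
    w = target_probs[actions] / behavior_probs[actions]
    return w * rewards

def safe_to_deploy(estimates, baseline_value, delta=0.05):
    """Deploy only if a Hoeffding-style lower confidence bound on the
    candidate's value exceeds the baseline's value."""
    n = len(estimates)
    mean = estimates.mean()
    b = estimates.max()  # crude range bound; assumes estimates in [0, b]
    lcb = mean - b * np.sqrt(np.log(1 / delta) / (2 * n))
    return lcb > baseline_value

rng = np.random.default_rng(0)
behavior = np.array([0.5, 0.5])              # placeholder logging policy
candidate = np.array([0.7, 0.3])             # placeholder improved policy
actions = rng.integers(0, 2, size=1000)      # uniform logging matches behavior
rewards = rng.random(1000) * (actions == 0)  # toy reward: action 0 pays off
est = is_estimates((actions, rewards), candidate, behavior)
print(safe_to_deploy(est, baseline_value=0.25))
```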

Off-policy Learning for Remote Electrical Tilt Optimization

no code implementations • 21 May 2020 • Filippo Vannella, Jaeseong Jeong, Alexandre Proutiere

In this paper, we circumvent these issues by learning an improved policy in an offline manner using existing data collected on real networks.
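
In the same offline spirit, a toy version of learning an improved policy from existing data is to score a small set of candidate tilt policies on one logged dataset via inverse propensity scoring and keep the best. Everything below (logging policy, candidates, rewards) is synthetic for illustration, not data or a method from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_actions = 5000, 3
behavior_probs = np.full((n, n_actions), 1 / n_actions)  # uniform logging
actions = rng.integers(0, n_actions, size=n)
rewards = rng.random(n) + (actions == 1) * 0.5           # action 1 is best here

def ips_value(policy_probs):
    """IPS estimate of a stochastic policy's value on the logged data."""
    idx = np.arange(n)
    w = policy_probs[idx, actions] / behavior_probs[idx, actions]
    return float(np.mean(w * rewards))

# Three hand-made candidate policies (rows sum to 1 per sample).
candidates = {
    "always_down": np.tile([0.8, 0.1, 0.1], (n, 1)),
    "always_keep": np.tile([0.1, 0.8, 0.1], (n, 1)),
    "uniform":     np.full((n, n_actions), 1 / n_actions),
}
scores = {name: ips_value(p) for name, p in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)  # expect "always_keep" under this toy reward
```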
