Search Results for author: Harald Bayerlein

Found 8 papers, 6 papers with code

Learning to Recharge: UAV Coverage Path Planning through Deep Reinforcement Learning

1 code implementation · 6 Sep 2023 · Mirco Theile, Harald Bayerlein, Marco Caccamo, Alberto L. Sangiovanni-Vincentelli

Coverage path planning (CPP) is a critical problem in robotics, where the goal is to find an efficient path that covers every point in an area of interest.

reinforcement-learning
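The abstract above defines coverage path planning (CPP). As a minimal illustration only (not the deep-RL method of the paper), the simplest CPP baseline is a boustrophedon ("lawnmower") sweep over a grid:

```python
# Hypothetical sketch: a naive boustrophedon sweep that visits every
# cell of a rows x cols grid exactly once -- a common CPP baseline.

def lawnmower_path(rows, cols):
    """Return a list of (row, col) cells covering the whole grid."""
    path = []
    for r in range(rows):
        # Sweep left-to-right on even rows, right-to-left on odd rows,
        # so consecutive cells are always adjacent.
        cells = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cells:
            path.append((r, c))
    return path

path = lawnmower_path(3, 4)
assert len(path) == 12 and len(set(path)) == 12  # full coverage, no repeats
```

Learning-based CPP methods such as the one in this paper aim to improve on such fixed sweeps under constraints (obstacles, battery, recharging) that a static pattern cannot handle.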

Model-aided Federated Reinforcement Learning for Multi-UAV Trajectory Planning in IoT Networks

1 code implementation · 3 Jun 2023 · Jichao Chen, Omid Esrafilian, Harald Bayerlein, David Gesbert, Marco Caccamo

Deploying teams of unmanned aerial vehicles (UAVs) to harvest data from distributed Internet of Things (IoT) devices requires efficient trajectory planning and coordination algorithms.

Federated Learning · Multi-agent Reinforcement Learning +1

Model-aided Deep Reinforcement Learning for Sample-efficient UAV Trajectory Design in IoT Networks

no code implementations · 21 Apr 2021 · Omid Esrafilian, Harald Bayerlein, David Gesbert

Deep Reinforcement Learning (DRL) is gaining attention as a potential approach to design trajectories for autonomous unmanned aerial vehicles (UAVs) used as flying access points in the context of cellular or Internet of Things (IoT) connectivity.

Q-Learning · Reinforcement Learning (RL) +1
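Tabular Q-learning, one of the tasks tagged for this paper, can be sketched on a toy corridor world. This is a hypothetical minimal example of the basic algorithm, not the model-aided DRL approach of the paper:

```python
import random

# Toy example: states 0..4 on a corridor, actions 0 = left, 1 = right,
# reward 1 only on reaching state 4. Q-learning update:
#   Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
N, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N)]
alpha, gamma, eps = 0.5, 0.9, 0.1

random.seed(0)
for _ in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection (random tie-break).
        if random.random() < eps or Q[s][0] == Q[s][1]:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda act: Q[s][act])
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == GOAL else 0.0
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy moves right at every pre-goal state.
assert all(Q[s][1] > Q[s][0] for s in range(GOAL))
```

DRL methods replace the table `Q` with a neural network so the same idea scales to the large state spaces that UAV trajectory design involves.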

Multi-UAV Path Planning for Wireless Data Harvesting with Deep Reinforcement Learning

1 code implementation · 23 Oct 2020 · Harald Bayerlein, Mirco Theile, Marco Caccamo, David Gesbert

Harvesting data from distributed Internet of Things (IoT) devices with multiple autonomous unmanned aerial vehicles (UAVs) is a challenging problem requiring flexible path planning methods.

Collision Avoidance · Multi-agent Reinforcement Learning +2

UAV Path Planning for Wireless Data Harvesting: A Deep Reinforcement Learning Approach

3 code implementations · 1 Jul 2020 · Harald Bayerlein, Mirco Theile, Marco Caccamo, David Gesbert

Autonomous deployment of unmanned aerial vehicles (UAVs) supporting next-generation communication networks requires efficient trajectory planning methods.

reinforcement-learning · Reinforcement Learning (RL) +1

UAV Coverage Path Planning under Varying Power Constraints using Deep Reinforcement Learning

2 code implementations · 5 Mar 2020 · Mirco Theile, Harald Bayerlein, Richard Nai, David Gesbert, Marco Caccamo

Coverage path planning (CPP) is the task of designing a trajectory that enables a mobile agent to travel over every point of an area of interest.

Robotics · Systems and Control
