Proficiency Constrained Multi-Agent Reinforcement Learning for Environment-Adaptive Multi UAV-UGV Teaming

10 Feb 2020 · Qifei Yu, Zhexin Shen, Yijiang Pang, Rui Liu

A mixed aerial and ground robot team, which includes both unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs), is widely used for disaster rescue, social security, precision agriculture, and military missions. However, team capability and the corresponding configuration vary because robots differ in motion speed, perception range, reachable area, and resilience to dynamic environments. Because of this heterogeneity and the robots' varying resilience, it is challenging to perform a task with an optimal balance between reasonable task allocation and maximum utilization of robot capability. To address this challenge for effective mixed ground and aerial teaming, this paper develops a novel teaming method, proficiency-aware multi-agent deep reinforcement learning (Mix-RL), which guides ground and aerial cooperation by seeking the best alignment between robot capabilities, task requirements, and environment conditions. Mix-RL exploits robot capabilities to the fullest extent while remaining aware of how well those capabilities adapt to the task requirements and environment conditions. Mix-RL's effectiveness in guiding mixed teaming was validated on the task of "social security for criminal vehicle tracking".
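The abstract does not give Mix-RL's reward formulation, so the sketch below is only an illustration of the general idea: combining task progress with a capability-alignment term so that a learned policy favors assignments matching each robot's proficiency to the task and environment. All names (`RobotSpec`, `TaskContext`, `proficiency`, `team_reward`), fields, and weights are hypothetical assumptions, not taken from the paper.

```python
# Minimal sketch (not the paper's implementation) of a proficiency-aware shaped
# reward for a mixed UAV-UGV tracking team. All names and weights are illustrative.
from dataclasses import dataclass

@dataclass
class RobotSpec:
    speed: float               # max motion speed (m/s)
    sense_range: float         # perception range (m)
    terrain_resilience: float  # 0..1, robustness to the current environment

@dataclass
class TaskContext:
    target_speed: float        # speed of the tracked vehicle (m/s)
    required_range: float      # sensing range needed to keep the target in view (m)
    terrain_difficulty: float  # 0..1, how hostile the environment is

def proficiency(robot: RobotSpec, task: TaskContext) -> float:
    """Score in [0, 1]: how well this robot's capabilities align with the task
    requirements and environment conditions (higher is better)."""
    speed_fit = min(robot.speed / max(task.target_speed, 1e-6), 1.0)
    range_fit = min(robot.sense_range / max(task.required_range, 1e-6), 1.0)
    env_fit = 1.0 - max(task.terrain_difficulty - robot.terrain_resilience, 0.0)
    return speed_fit * range_fit * env_fit

def team_reward(tracking_progress: float,
                robots: list,
                assignments: list,
                align_weight: float = 0.5) -> float:
    """Shaped team reward: task progress plus an average proficiency-alignment
    bonus, so the policy is rewarded for exploiting each robot's capability."""
    alignment = sum(proficiency(r, t) for r, t in zip(robots, assignments)) / len(robots)
    return tracking_progress + align_weight * alignment

if __name__ == "__main__":
    uav = RobotSpec(speed=20.0, sense_range=150.0, terrain_resilience=0.9)
    ugv = RobotSpec(speed=12.0, sense_range=40.0, terrain_resilience=0.5)
    wide_search = TaskContext(target_speed=15.0, required_range=120.0, terrain_difficulty=0.7)
    close_follow = TaskContext(target_speed=15.0, required_range=30.0, terrain_difficulty=0.3)
    # UAV takes the wide-area search, UGV takes the close follow: better alignment, higher reward.
    print(team_reward(0.6, [uav, ugv], [wide_search, close_follow]))
```

In an actual multi-agent deep RL setup, such a shaped reward would be fed to the training algorithm at each step; the alignment weight and the form of the proficiency score are design choices that would have to come from the paper itself.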
