Context-Aware Deep Q-Network for Decentralized Cooperative Reconnaissance by a Robotic Swarm

31 Jan 2020  ·  Nishant Mohanty, Mohitvishnu S. Gadde, Suresh Sundaram, Narasimhan Sundararajan, P. B. Sujit ·

One of the crucial problems in robotic swarm-based operation is to search for and neutralize heterogeneous targets in an unknown and uncertain environment, without any communication within the swarm. Here, some targets can be neutralized by a single robot, while others need multiple robots acting in a particular sequence. The complexity of the problem arises from scalability and information uncertainty, which restrict each robot's awareness of the swarm and of the target distribution. In this paper, this problem is addressed by proposing a novel Context-Aware Deep Q-Network (CA-DQN) framework to obtain communication-free cooperation between the robots in the swarm. Each robot maintains an adaptive grid representation of its vicinity with the context information embedded into it, keeping the swarm intact while searching for and neutralizing targets. The problem is formulated in a reinforcement learning framework in which two Deep Q-Networks (DQNs) handle 'conflict' and 'conflict-free' scenarios separately. A self-play-based approach is used to determine the optimal policy for the DQNs. Monte-Carlo simulations and comparison studies with a state-of-the-art coalition formation algorithm are performed to verify the performance of CA-DQN under varying environmental parameters. The results show that the approach is invariant to the number of detected targets and the number of robots in the swarm. The paper also presents a real-time implementation of CA-DQN for different scenarios using ground robots in a laboratory environment, demonstrating that CA-DQN runs on low-power computing devices.
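The two-network dispatch described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the class name, the grid encoding (cell value 2 marking a conflicting robot), and the tabular Q-arrays standing in for the paper's deep networks are all hypothetical, not the authors' implementation.

```python
import numpy as np

class ContextAwareAgent:
    """Hypothetical sketch of the CA-DQN dispatch idea: one Q-function
    for 'conflict' states and another for 'conflict-free' states.
    Tabular Q-arrays stand in for the paper's two deep networks."""

    def __init__(self, n_states, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        # Two separate Q-tables, one per scenario.
        self.q_conflict = rng.normal(size=(n_states, n_actions))
        self.q_free = rng.normal(size=(n_states, n_actions))
        self.n_actions = n_actions

    def detect_conflict(self, local_grid):
        # Assumed convention for this sketch: a cell value of 2 marks
        # another robot heading for the same target (a 'conflict').
        return bool((np.asarray(local_grid) == 2).any())

    def act(self, state, local_grid):
        # Route the state through the Q-function matching the scenario,
        # then act greedily with respect to that Q-function.
        q = self.q_conflict if self.detect_conflict(local_grid) else self.q_free
        return int(np.argmax(q[state]))
```

The point of the sketch is the routing step in `act`: the agent's local, context-embedded grid decides which of the two value functions governs the action, with no inter-robot communication involved.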


Categories


Multiagent Systems
