A Decentralized Communication Policy for Multi Agent Multi Armed Bandit Problems

7 Oct 2019 · Pathmanathan Pankayaraj, D. H. S. Maithripala

This paper proposes a novel policy for a group of agents to, individually as well as collectively, solve a multi-armed bandit (MAB) problem. The policy relies solely on the information an agent obtains by sampling the options itself and by communicating with its neighbors. Option selection is based on an Upper Confidence Bound (UCB) strategy, while the proposed communication strategy directs each agent to communicate with the agents it believes are more likely to be exploring than exploiting. The overall strategy is shown to significantly outperform a random communication policy based on independent Erdős-Rényi (ER) graphs. The policy is also shown to be cost-effective in terms of communication and thus easily scalable to a large network of agents.
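
The abstract does not give the exact message contents or the estimate an agent uses to judge which neighbors are exploring, so the following is a minimal Python sketch of the general scheme only: UCB1 arm selection plus a targeted communication step. The class UCBCommAgent, the method names, and the proxy used in choose_contact (prefer the neighbor we have seen the fewest samples from, on the assumption its estimates are loosest) are illustrative assumptions, not the paper's actual rule.

import math
import random

class UCBCommAgent:
    """One agent in a networked group solving a shared K-armed bandit.

    Arm selection follows the standard UCB1 rule. The communication
    rule is a hypothetical stand-in for the paper's policy: the agent
    sends its latest sample to the neighbor it believes is most likely
    to be exploring, proxied here by the fewest pulls observed from it.
    """

    def __init__(self, n_arms, agent_id):
        self.id = agent_id
        self.counts = [0] * n_arms      # pulls folded in per arm (own + received)
        self.means = [0.0] * n_arms     # running mean reward per arm
        self.neighbor_pulls = {}        # neighbor id -> pulls observed from it

    def select_arm(self):
        # Play each arm once, then maximize the UCB1 index.
        for arm, n in enumerate(self.counts):
            if n == 0:
                return arm
        t = sum(self.counts)
        return max(range(len(self.counts)),
                   key=lambda a: self.means[a]
                               + math.sqrt(2.0 * math.log(t) / self.counts[a]))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]

    def choose_contact(self, neighbors):
        # Hypothetical proxy for "most likely exploring": the neighbor
        # we have seen the fewest samples from has the loosest estimates.
        return min(neighbors, key=lambda nb: self.neighbor_pulls.get(nb.id, 0))

    def receive(self, sender_id, arm, reward):
        # Fold a neighbor's sampled (arm, reward) pair into our estimates.
        self.neighbor_pulls[sender_id] = self.neighbor_pulls.get(sender_id, 0) + 1
        self.update(arm, reward)


if __name__ == "__main__":
    probs = [0.2, 0.5, 0.8]   # hypothetical Bernoulli arm means
    agents = [UCBCommAgent(len(probs), i) for i in range(4)]
    for _ in range(2000):
        for ag in agents:
            arm = ag.select_arm()
            reward = 1.0 if random.random() < probs[arm] else 0.0
            ag.update(arm, reward)
            peer = ag.choose_contact([a for a in agents if a is not ag])
            peer.receive(ag.id, arm, reward)
    print("agent 0 mean estimates:", [round(m, 2) for m in agents[0].means])

Note that each agent sends exactly one message per round here, which reflects the abstract's claim of communication cost-effectiveness; an ER-graph baseline would instead broadcast along randomly drawn edges regardless of what neighbors are doing.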
