Meta Learning for Multi-agent Communication

Recent work has shown remarkable progress in training artificial agents to understand natural language, but relies on large amounts of raw data and correspondingly large compute budgets. An interesting hypothesis is to instead train artificial agents via multi-agent communication, using only small amounts of task-specific human data to ground the emergent language in natural language. This allows agents to communicate with humans without requiring enormous numbers of expensive human demonstrations. Evolutionary studies have shown that simpler, more easily adaptable languages arise from communication within large and diverse populations. We model this supposition with artificial agents, proposing an adaptive population-based meta-reinforcement learning approach that builds such a population iteratively. We show empirical results on referential games involving natural language, where our agents outperform all baselines on both task performance and language score, including in human evaluation. We demonstrate that our method induces constructive diversity into a growing population of agents, which is beneficial for training the meta-agent.
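
The abstract describes the training scheme only at a high level; the sketch below illustrates one plausible reading of the iterative population-building loop, not the authors' actual method. All names here (Agent, train_pair, grow_population_and_meta_train) are hypothetical placeholders, and the referential-game dynamics are simulated with random play rather than a real learned speaker/listener pair.

```python
# Minimal sketch, assuming the approach alternates between (1) growing a
# population of partner agents and (2) meta-training a single meta-agent
# against random samples from that population. Hypothetical code, not from
# the paper.

import random


class Agent:
    """Placeholder partner; a real implementation would wrap a neural
    speaker/listener policy over messages and referent choices."""

    def __init__(self, seed):
        self.rng = random.Random(seed)

    def act(self, observation):
        # Stand-in for message generation / referent selection.
        return self.rng.random()


def train_pair(meta_agent, partner, n_episodes=100):
    """Stand-in for playing referential games between the meta-agent and a
    population member and applying RL updates; here it only simulates play
    and returns a fraction of "successful" episodes."""
    successes = sum(
        abs(meta_agent.act(None) - partner.act(None)) < 0.5
        for _ in range(n_episodes)
    )
    return successes / n_episodes


def grow_population_and_meta_train(n_iterations=10, sample_size=3):
    meta_agent = Agent(seed=0)
    population = [Agent(seed=1)]  # start from a single partner
    for it in range(n_iterations):
        # 1) Add a freshly initialised agent so the population stays diverse.
        population.append(Agent(seed=it + 2))
        # 2) Meta-train against a random sample of the current population,
        #    so the meta-agent must adapt to many partner conventions.
        partners = random.sample(population, min(sample_size, len(population)))
        scores = [train_pair(meta_agent, p) for p in partners]
        print(f"iter {it}: population={len(population)} "
              f"mean task score={sum(scores) / len(scores):.2f}")
    return meta_agent, population


if __name__ == "__main__":
    grow_population_and_meta_train()
```

In this reading, the key design choice is that the meta-agent never overfits to a single partner: each meta-update is taken against a fresh sample of conventions from a growing population, which is what would induce the "constructive diversity" the abstract credits for the meta-agent's performance.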
