
Generalizing Emergent Communication

We converted the recently developed BabyAI grid world platform to a sender/receiver setup to test the hypothesis that established deep reinforcement learning techniques are sufficient to incentivize the emergence of a grounded discrete communication protocol between generalized agents. This is in contrast to previous experiments that employed straight-through estimation or specialized inductive biases. Our results show that these can indeed be avoided by instead providing proper environmental incentives. Moreover, they show that a longer interval between communications incentivized more abstract semantics. In some cases, the communicating agents adapted to new environments more quickly than a monolithic agent, showcasing the potential of emergent communication for transfer learning and generalization more broadly.
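To make the contrast with straight-through estimation concrete, the sketch below treats the sender's message as an ordinary discrete action, sampled from a categorical policy and trained with a plain policy-gradient (REINFORCE) update driven only by a shared environmental reward, so no gradient is passed through the discrete channel. This is a minimal illustration under placeholder assumptions, not the paper's implementation: the observation sizes, vocabulary size, network architectures, and the `reward_fn` helper are all hypothetical, and the actual experiments use the BabyAI environments and full multi-step RL training rather than the single-step update shown here.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

# Hypothetical dimensions; the paper's actual BabyAI observation/action
# spaces and message vocabulary size are not reproduced here.
OBS_DIM, VOCAB_SIZE, N_ACTIONS = 16, 8, 4


class Sender(nn.Module):
    """Maps the sender's (private) observation to a distribution over discrete tokens."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(),
                                 nn.Linear(64, VOCAB_SIZE))

    def forward(self, obs):
        return Categorical(logits=self.net(obs))


class Receiver(nn.Module):
    """Conditions the receiver's action policy on its own observation plus the received token."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, 16)
        self.net = nn.Sequential(nn.Linear(OBS_DIM + 16, 64), nn.Tanh(),
                                 nn.Linear(64, N_ACTIONS))

    def forward(self, obs, token):
        joint = torch.cat([obs, self.embed(token)], dim=-1)
        return Categorical(logits=self.net(joint))


def reinforce_step(sender, receiver, optimizer, sender_obs, receiver_obs, reward_fn):
    """One policy-gradient update: the message is sampled like any other action,
    so nothing differentiable crosses the channel (unlike straight-through estimation)."""
    msg_dist = sender(sender_obs)
    token = msg_dist.sample()                      # discrete, non-differentiable message
    act_dist = receiver(receiver_obs, token)
    action = act_dist.sample()

    reward = reward_fn(action)                     # shared environmental reward (hypothetical helper)
    log_prob = msg_dist.log_prob(token) + act_dist.log_prob(action)
    loss = -(reward * log_prob).mean()             # REINFORCE objective for both agents

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward.mean().item()
```

In this formulation, the only learning signal for the communication protocol is the task reward itself, which is the kind of environmental incentive the paper argues suffices for grounded discrete communication to emerge.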
