In Multi-Agent Reinforcement Learning (MARL), specialized channels are often introduced that allow agents to communicate directly with one another.
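As a minimal sketch of such a channel (assuming a simple attention-based read-out over the other agents' messages; the module names and sizes below are illustrative, not a specific published architecture):

```python
import torch
import torch.nn as nn

class CommChannel(nn.Module):
    """Attention-based channel: each agent reads a weighted mix of the others' messages."""
    def __init__(self, hidden_dim: int, msg_dim: int):
        super().__init__()
        self.to_msg = nn.Linear(hidden_dim, msg_dim)    # what an agent broadcasts
        self.to_query = nn.Linear(hidden_dim, msg_dim)  # what an agent is looking for

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (num_agents, hidden_dim)
        messages = self.to_msg(hidden)                  # (A, msg_dim)
        queries = self.to_query(hidden)                 # (A, msg_dim)
        scores = queries @ messages.t()                 # (A, A) agent-to-agent relevance
        self_mask = torch.eye(scores.size(0), dtype=torch.bool)
        scores = scores.masked_fill(self_mask, -1e9)    # an agent does not read its own message
        weights = torch.softmax(scores, dim=-1)
        return weights @ messages                       # (A, msg_dim) incoming communication

channel = CommChannel(hidden_dim=32, msg_dim=16)
incoming = channel(torch.randn(4, 32))                  # 4 agents exchange messages
```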
The fundamental challenge in causal induction is to infer the underlying graph structure given observational and/or interventional data.
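As a toy illustration of this setting (a hypothetical three-variable system, not a method proposed here): observational samples alone cannot orient an edge between X and Y, but comparing them against samples collected under an intervention do(X = x) reveals which variables are downstream of X.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, do_x=None):
    """Toy SCM: X -> Y, with Z independent of both. `do_x` overrides X (an intervention)."""
    x = rng.normal(size=n) if do_x is None else np.full(n, do_x)
    y = 2.0 * x + rng.normal(size=n)
    z = rng.normal(size=n)
    return x, y, z

_, y_obs, z_obs = simulate(10_000)              # observational regime
_, y_do, z_do = simulate(10_000, do_x=3.0)      # interventional regime

print("shift in E[Y] under do(X=3):", y_do.mean() - y_obs.mean())  # large -> Y is downstream of X
print("shift in E[Z] under do(X=3):", z_do.mean() - z_obs.mean())  # near zero -> no edge from X to Z
```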
A central goal for AI and causality is thus the joint discovery of abstract representations and causal structure.
First, GNNs do not predispose interactions to be sparse, even though interactions among largely independent entities are likely to be sparse.
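One simple way to encourage such sparsity, sketched here rather than taken from any specific architecture, is to multiply every pairwise message by a learned gate and penalize the gates so that most interactions are switched off:

```python
import torch
import torch.nn as nn

class SparseMessagePassing(nn.Module):
    """One GNN-style round in which a learned gate can switch individual edges off."""
    def __init__(self, dim: int):
        super().__init__()
        self.message = nn.Linear(2 * dim, dim)   # message computed from (sender, receiver) features
        self.gate = nn.Linear(2 * dim, 1)        # per-edge on/off score

    def forward(self, nodes: torch.Tensor):
        # nodes: (N, dim); build all ordered (sender, receiver) pairs
        n = nodes.size(0)
        send = nodes.unsqueeze(1).expand(n, n, -1)
        recv = nodes.unsqueeze(0).expand(n, n, -1)
        pairs = torch.cat([send, recv], dim=-1)           # (N, N, 2*dim)
        gates = torch.sigmoid(self.gate(pairs))           # (N, N, 1); ~0 means "no interaction"
        msgs = gates * self.message(pairs)                 # gated messages
        sparsity_penalty = gates.mean()                    # add to the loss to keep few edges active
        return msgs.sum(dim=0), sparsity_penalty           # aggregate incoming messages per node
```

The penalty term would be added to the training loss, so the model pays a cost for every interaction it keeps.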
We explore the use of such a communication channel in the context of deep learning for modeling the structure of complex environments.
Feed-forward neural networks consist of a sequence of layers, in which each layer transforms the representation produced by the previous layer.
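Concretely, a small illustrative example in PyTorch (the layer sizes are arbitrary):

```python
import torch.nn as nn

# A feed-forward stack: each layer consumes only the previous layer's output.
mlp = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),   # layer 1
    nn.Linear(128, 128), nn.ReLU(),  # layer 2
    nn.Linear(128, 10),              # output layer
)
```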
To effectively utilize the wealth of potential top-down information available, and to prevent the cacophony of intermixed signals in a bidirectional architecture, mechanisms are needed to restrict information flow.
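One possible mechanism, shown here only as a sketch with hypothetical module names, is a hard query-key selection in which each bottom-up state reads just the few top-down signals most relevant to it:

```python
import torch
import torch.nn as nn

class TopDownGate(nn.Module):
    """Lets a lower layer read top-down signals only when its own state asks for them."""
    def __init__(self, dim: int, keep: int = 2):
        super().__init__()
        self.query = nn.Linear(dim, dim)   # what the lower layer is looking for
        self.key = nn.Linear(dim, dim)     # how each top-down signal advertises itself
        self.keep = keep                   # number of top-down signals allowed through

    def forward(self, bottom_up: torch.Tensor, top_down: torch.Tensor) -> torch.Tensor:
        # bottom_up: (batch, dim); top_down: (batch, num_signals, dim)
        q = self.query(bottom_up).unsqueeze(1)                     # (B, 1, dim)
        k = self.key(top_down)                                     # (B, S, dim)
        scores = (q * k).sum(-1)                                   # (B, S) relevance of each signal
        top = scores.topk(k=self.keep, dim=-1)                     # keep only the most relevant signals
        mask = torch.zeros_like(scores).scatter_(-1, top.indices, 1.0)
        attn = torch.softmax(scores.masked_fill(mask == 0, -1e9), dim=-1)
        selected = (attn.unsqueeze(-1) * top_down).sum(dim=1)      # restricted top-down input
        return bottom_up + selected
```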
To use a video game as an illustration, two enemies of the same type will share schemata but will have separate object files to encode their distinct state (e.g., health, position).
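In code, the distinction loosely resembles the split between a shared, type-level description and per-instance state (the class names below are purely illustrative):

```python
from dataclasses import dataclass

@dataclass
class EnemySchema:
    """Shared, type-level knowledge: how this kind of enemy behaves."""
    max_health: int
    speed: float

@dataclass
class ObjectFile:
    """Per-instance state bound to one concrete enemy on screen."""
    schema: EnemySchema
    health: int
    position: tuple

goblin = EnemySchema(max_health=20, speed=1.5)             # one shared schema ...
enemy_a = ObjectFile(goblin, health=20, position=(3, 4))   # ... two object files:
enemy_b = ObjectFile(goblin, health=7, position=(9, 1))    # same type, distinct state
```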