# Message Passing Neural Network

Introduced by Gilmer et al. in Neural Message Passing for Quantum Chemistry

There are at least eight notable examples of models from the literature that can be described using the Message Passing Neural Network (MPNN) framework. For simplicity we describe MPNNs that operate on undirected graphs $G$ with node features $x_{v}$ and edge features $e_{vw}$; it is straightforward to extend the formalism to directed multigraphs.

The forward pass has two phases: a message passing phase and a readout phase. The message passing phase runs for $T$ time steps and is defined in terms of message functions $M_{t}$ and vertex update functions $U_{t}$. During this phase, the hidden state $h_{v}^{t}$ at each node is updated based on messages $m_{v}^{t+1}$ according to

$$m_{v}^{t+1} = \sum_{w \in N(v)} M_{t}(h_{v}^{t}, h_{w}^{t}, e_{vw})$$

$$h_{v}^{t+1} = U_{t}(h_{v}^{t}, m_{v}^{t+1})$$

where $N(v)$ denotes the neighbors of $v$ in $G$. The readout phase computes a feature vector for the whole graph using some readout function $R$:

$$\hat{y} = R(\{ h_{v}^{T} \mid v \in G \})$$

The message functions $M_{t}$, vertex update functions $U_{t}$, and readout function $R$ are all learned differentiable functions. $R$ operates on the set of final node states and must be invariant to permutations of those states in order for the MPNN to be invariant to graph isomorphism.
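The two phases above can be sketched in plain numpy. This is a minimal illustration, not the authors' implementation: the graph, the feature values, and the stand-in functions `M`, `U`, and `R` are all hypothetical choices (fixed maps in place of learned neural networks), kept only to show the message/update/readout structure.

```python
import numpy as np

def mpnn_forward(neighbors, x, e, M, U, R, T):
    """One MPNN forward pass on an undirected graph.

    neighbors: dict v -> list of adjacent nodes
    x:         dict v -> initial node feature vector
    e:         dict (v, w) -> edge feature, keyed with v < w (undirected)
    M, U, R:   message, update, and readout functions (learned in practice)
    """
    h = {v: np.asarray(x[v], dtype=float) for v in neighbors}
    for _ in range(T):
        # Message phase: m_v = sum over neighbors w of M(h_v, h_w, e_vw)
        m = {v: sum(M(h[v], h[w], e[tuple(sorted((v, w)))]) for w in neighbors[v])
             for v in neighbors}
        # Update phase: h_v = U(h_v, m_v)
        h = {v: U(h[v], m[v]) for v in neighbors}
    # Readout phase: permutation-invariant aggregation of final node states
    return R([h[v] for v in neighbors])

# Hypothetical stand-ins for the learned functions (illustration only)
M = lambda hv, hw, evw: evw * hw        # message weighted by scalar edge feature
U = lambda hv, mv: np.tanh(hv + mv)     # simple nonlinear vertex update
R = lambda hs: np.sum(hs, axis=0)       # sum readout is permutation-invariant

# Toy triangle graph with 2-d node features and scalar edge features
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
x = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
e = {(0, 1): 0.5, (0, 2): 1.0, (1, 2): 2.0}

y = mpnn_forward(neighbors, x, e, M, U, R, T=2)
```

Because `R` sums over node states, relabeling the nodes (and permuting the features and edge keys accordingly) leaves `y` unchanged, which is the invariance-to-isomorphism property noted above.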
