Graph Models

Graph Attention Network v2

Introduced by Brody et al. in How Attentive are Graph Attention Networks?

The GATv2 operator from the “How Attentive are Graph Attention Networks?” paper fixes the static attention problem of the standard GAT layer: because the two linear layers in standard GAT are applied consecutively, they collapse into a single linear map, so the ranking of attended nodes is the same for every query node. GATv2 moves the nonlinearity between the two layers, yielding dynamic attention in which every node can compute its own ranking over the nodes it attends to.

GATv2 scoring function:

$e_{i,j} =\mathbf{a}^{\top}\mathrm{LeakyReLU}\left(\mathbf{W}[\mathbf{h}_i \, \Vert \,\mathbf{h}_j]\right)$

Source: How Attentive are Graph Attention Networks?
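As a minimal sketch (variable names and shapes are illustrative, not from the paper), the GATv2 scoring function above, followed by the usual softmax normalization over attended nodes, can be written in plain NumPy. Note that the LeakyReLU is applied *before* the projection by $\mathbf{a}$, which is exactly what prevents the two linear maps from collapsing into one:

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, slope=0.2):
    # elementwise LeakyReLU with negative slope `slope`
    return np.where(x > 0, x, slope * x)

def gatv2_scores(h, W, a, slope=0.2):
    """GATv2 attention scores e_ij = a^T LeakyReLU(W [h_i || h_j])
    for all node pairs.  h: (N, F) node features, W: (F2, 2F), a: (F2,).
    A dense all-pairs loop for clarity; real layers restrict j to neighbors."""
    n = h.shape[0]
    e = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            concat = np.concatenate([h[i], h[j]])        # [h_i || h_j]
            e[i, j] = a @ leaky_relu(W @ concat, slope)  # a^T LeakyReLU(W ...)
    return e

def softmax_rows(e):
    # normalize scores per query node i (row-wise softmax)
    z = np.exp(e - e.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

# toy example: 4 nodes, 3 input features, hidden size 5 (all hypothetical)
h = rng.standard_normal((4, 3))
W = rng.standard_normal((5, 6))   # 6 = 2 * 3 for the concatenation
a = rng.standard_normal(5)
alpha = softmax_rows(gatv2_scores(h, W, a))  # (4, 4) attention coefficients
```

Because the scores are computed before row-wise normalization, each query node $i$ gets its own distribution $\alpha_{i,\cdot}$, and the rankings can differ across rows, which is the dynamic-attention property GATv2 is designed for.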




