Graph Models

Graph Attention Network v2

Introduced by Brody et al. in How Attentive are Graph Attention Networks?

The GATv2 operator from the “How Attentive are Graph Attention Networks?” paper fixes the static attention problem of the standard GAT layer: because the two linear layers in standard GAT are applied consecutively, they collapse into a single linear map, so the ranking of attended nodes is the same regardless of the query node. GATv2 instead applies the nonlinearity between the two layers, yielding dynamic attention in which every query node can produce its own ranking over the nodes it attends to.

GATv2 scoring function:

$e_{i,j} =\mathbf{a}^{\top}\mathrm{LeakyReLU}\left(\mathbf{W}[\mathbf{h}_i \, \Vert \,\mathbf{h}_j]\right)$

Source: How Attentive are Graph Attention Networks?
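The scoring rule above can be sketched in plain NumPy. This is a minimal illustration, not the reference implementation (libraries such as PyTorch Geometric provide `GATv2Conv`); the function and variable names here are chosen for this example, and the key point is that `LeakyReLU` sits between the weight matrix `W` and the attention vector `a`, which is what makes the attention dynamic.

```python
import numpy as np

def leaky_relu(x, negative_slope=0.2):
    """Elementwise LeakyReLU, as used inside the GATv2 score."""
    return np.where(x > 0, x, negative_slope * x)

def gatv2_attention(h, W, a, adj):
    """Compute GATv2 attention coefficients.

    e_ij = a^T LeakyReLU(W [h_i || h_j]) for each edge (i, j),
    then a softmax over each node i's neighborhood.

    h:   (n, d_in) node features
    W:   (d_out, 2 * d_in) shared weight matrix
    a:   (d_out,) attention vector
    adj: (n, n) boolean adjacency (should include self-loops)
    """
    n = h.shape[0]
    e = np.full((n, n), -np.inf)  # -inf masks non-edges in the softmax
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                z = np.concatenate([h[i], h[j]])   # [h_i || h_j]
                e[i, j] = a @ leaky_relu(W @ z)    # nonlinearity before a^T
    # Row-wise softmax over each node's neighbors
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)
    return alpha

# Toy usage on a fully connected 4-node graph
rng = np.random.default_rng(0)
n, d_in, d_out = 4, 3, 5
h = rng.normal(size=(n, d_in))
W = rng.normal(size=(d_out, 2 * d_in))
a = rng.normal(size=d_out)
adj = np.ones((n, n), dtype=bool)
alpha = gatv2_attention(h, W, a, adj)
```

Each row of `alpha` is a probability distribution over the query node's neighbors; because the scores depend jointly on `h_i` and `h_j` through the nonlinearity, different query nodes can rank the same set of neighbors differently.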


Tasks


Task                        Papers   Share
Graph Attention             4        40.00%
Graph Classification        1        10.00%
Graph Learning              1        10.00%
Node Classification         1        10.00%
Whole Slide Images          1        10.00%
Graph Property Prediction   1        10.00%
Link Prediction             1        10.00%
