Graph Attention Networks

ICLR 2018 · Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, Yoshua Bengio

We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).
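To make the masked self-attention concrete, below is a minimal sketch of a single-head GAT layer in PyTorch. The class name `GATLayer` and the dense-adjacency interface are illustrative assumptions, not the authors' reference implementation; the computation follows the mechanism described in the abstract: a shared linear transform, pairwise attention logits restricted (masked) to each node's neighborhood, a softmax over neighbors, and an attention-weighted aggregation.

```python
# Minimal single-head GAT layer (illustrative sketch, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)  # shared linear transform W
        self.a = nn.Linear(2 * out_dim, 1, bias=False)   # attention vector a
        self.leaky_relu = nn.LeakyReLU(0.2)              # negative slope used in the paper

    def forward(self, h, adj):
        # h:   (N, in_dim) node features
        # adj: (N, N) binary adjacency, assumed to include self-loops
        Wh = self.W(h)                                   # (N, out_dim)
        N = Wh.size(0)
        # Attention logits e_ij = LeakyReLU(a^T [Wh_i || Wh_j]) for all pairs (i, j)
        Wh_i = Wh.unsqueeze(1).expand(N, N, -1)          # Wh_i[i, j] = Wh[i]
        Wh_j = Wh.unsqueeze(0).expand(N, N, -1)          # Wh_j[i, j] = Wh[j]
        e = self.leaky_relu(self.a(torch.cat([Wh_i, Wh_j], dim=-1))).squeeze(-1)
        # Masked attention: softmax only over each node's neighborhood
        e = e.masked_fill(adj == 0, float('-inf'))
        alpha = F.softmax(e, dim=1)                      # (N, N) attention coefficients
        return F.elu(alpha @ Wh)                         # aggregate neighbors' features

# Toy usage: 4 nodes with 3-dimensional features on a fully connected graph.
h = torch.randn(4, 3)
adj = torch.ones(4, 4)
out = GATLayer(3, 8)(h, adj)  # -> (4, 8)
```

In the paper, several such attention heads are run independently and their outputs concatenated (or averaged at the final prediction layer) to stabilize learning; the masking to the neighborhood is what avoids the costly dense matrix operations of spectral methods.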


Evaluation results from the paper


| Task | Dataset | Model | Metric | Value | Global rank |
|------|---------|-------|--------|-------|-------------|
| Node Classification | CiteSeer (0.5%) | GAT | Accuracy | 38.2% | #13 |
| Node Classification | CiteSeer (1%) | GAT | Accuracy | 46.5% | #13 |
| Node Classification | CiteSeer (public split: fixed 20 nodes per class) | GAT | Accuracy | 72.5% | #7 |
| Document Classification | Cora | GAT | Accuracy | 83.0% | #3 |
| Node Classification | Cora (0.5%) | GAT | Accuracy | 41.4% | #12 |
| Node Classification | Cora (1%) | GAT | Accuracy | 48.6% | #13 |
| Node Classification | Cora (3%) | GAT | Accuracy | 56.8% | #14 |
| Node Classification | Cora (public split: fixed 20 nodes per class) | GAT | Accuracy | 83.0% | #6 |
| Skeleton-Based Action Recognition | J-HMDB (Early Action) | GAT | Accuracy (10% observed) | 58.1 | #2 |
| Graph Regression | Lipophilicity | GAT | RMSE | 0.950 | #8 |
| Node Classification | PPI | GAT | F1 | 97.3 | #7 |
| Node Classification | PubMed (0.03%) | GAT | Accuracy | 50.9% | #12 |
| Node Classification | PubMed (0.05%) | GAT | Accuracy | 50.4% | #13 |
| Node Classification | PubMed (0.1%) | GAT | Accuracy | 59.6% | #13 |
| Node Classification | PubMed (public split: fixed 20 nodes per class) | GAT | Accuracy | 79.0% | #5 |