Universal Graph Transformer Self-Attention Networks

26 Sep 2019 · Dai Quoc Nguyen, Tu Dinh Nguyen, Dinh Phung

The transformer self-attention network has been extensively used in research domains such as computer vision, image processing, and natural language processing, but it has not been actively used in graph neural networks (GNNs), where constructing an advanced aggregation function is essential...
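Since the paper's contribution is to use transformer self-attention as the GNN aggregation function, the following minimal PyTorch sketch illustrates the general idea: each node attends over itself together with a set of sampled neighbors, and the node's own output position serves as its updated embedding. This is an illustrative approximation, not the authors' released implementation; the class name, hyperparameters, and the fixed-size neighbor-sampling interface are assumptions.

```python
# Illustrative sketch (not the authors' code): transformer self-attention
# as a GNN aggregation function. Each node attends over [node; neighbors].
import torch
import torch.nn as nn

class AttentionAggregator(nn.Module):
    """Hypothetical aggregator: a transformer encoder over a node's
    sampled neighborhood, returning the node's updated embedding."""
    def __init__(self, dim: int, num_heads: int = 4, num_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, dim_feedforward=2 * dim,
            batch_first=True,  # inputs are (batch, sequence, feature)
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, node_feats, neighbor_feats):
        # node_feats: (num_nodes, dim); neighbor_feats: (num_nodes, k, dim)
        seq = torch.cat([node_feats.unsqueeze(1), neighbor_feats], dim=1)
        out = self.encoder(seq)  # self-attention over the short sequence
        return out[:, 0]         # the node's position is its new embedding

# Toy usage: 5 nodes, 3 sampled neighbors each, 16-dim features.
agg = AttentionAggregator(dim=16)
nodes = torch.randn(5, 16)
neighbors = torch.randn(5, 3, 16)
print(agg(nodes, neighbors).shape)  # torch.Size([5, 16])
```

A graph-level representation for classification could then be obtained by pooling (e.g., summing) the resulting node embeddings.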


Results from the Paper


TASK                  DATASET   MODEL                 METRIC    VALUE    GLOBAL RANK
Graph Classification  COLLAB    U2GNN (Unsupervised)  Accuracy  95.62%   #1
Graph Classification  COLLAB    U2GNN                 Accuracy  77.84%   #16
Graph Classification  D&D       U2GNN                 Accuracy  80.23%   #11
Graph Classification  D&D       U2GNN (Unsupervised)  Accuracy  95.67%   #1
Graph Classification  IMDb-B    U2GNN (Unsupervised)  Accuracy  96.41%   #1
Graph Classification  IMDb-B    U2GNN                 Accuracy  77.04%   #6
Graph Classification  IMDb-M    U2GNN                 Accuracy  53.60%   #5
Graph Classification  IMDb-M    U2GNN (Unsupervised)  Accuracy  89.20%   #1
Graph Classification  MUTAG     U2GNN (Unsupervised)  Accuracy  88.47%   #23
Graph Classification  MUTAG     U2GNN                 Accuracy  89.97%   #14
Graph Classification  PROTEINS  U2GNN (Unsupervised)  Accuracy  80.01%   #3
Graph Classification  PROTEINS  U2GNN                 Accuracy  78.53%   #8
Graph Classification  PTC       U2GNN                 Accuracy  69.63%   #9
Graph Classification  PTC       U2GNN (Unsupervised)  Accuracy  91.81%   #1

Methods used in the Paper


METHOD                        TYPE
Residual Connection           Skip Connections
BPE                           Subword Segmentation
Dense Connections             Feedforward Networks
Label Smoothing               Regularization
ReLU                          Activation Functions
Adam                          Stochastic Optimization
Softmax                       Output Functions
Dropout                       Regularization
Multi-Head Attention          Attention Modules
Layer Normalization           Normalization
Scaled Dot-Product Attention  Attention Mechanisms
Transformer                   Transformers
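Several of the attention components listed above come from the original Transformer. As a reference point, here is a minimal sketch of scaled dot-product attention, softmax(QK^T / sqrt(d_k))V; this is the standard formulation, not code specific to this paper.

```python
# Minimal sketch of scaled dot-product attention (standard formulation).
import math
import torch

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (..., seq_len, d_k); dividing by sqrt(d_k) keeps logits stable
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    return torch.softmax(scores, dim=-1) @ v

# Toy usage: batch of 2 sequences, length 4, dimension 8.
q = k = v = torch.randn(2, 4, 8)
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([2, 4, 8])
```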