Universal Graph Transformer Self-Attention Networks

26 Sep 2019 · Dai Quoc Nguyen, Tu Dinh Nguyen, Dinh Phung

The transformer self-attention network has been widely used in research domains such as computer vision, image processing, and natural language processing, but it has seen little use in graph neural networks (GNNs), where constructing an advanced aggregation function is essential. To this end, we present U2GNN, an effective GNN model that leverages a transformer self-attention mechanism followed by a recurrent transition to induce a powerful aggregation function for learning graph representations. Experimental results show that the proposed U2GNN achieves state-of-the-art accuracies on well-known benchmark datasets for graph classification. Our code is available at: https://github.com/daiquocnguyen/Graph-Transformer
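The aggregation step described in the abstract can be sketched in a few lines of PyTorch. The following is a minimal illustration, not the authors' implementation: it assumes a fixed number of sampled neighbors per node, uses a GRU cell as the recurrent transition, and pools attended vectors by averaging; the class and argument names (`U2GNNAggregation`, `num_heads`) are ours. See the linked repository for the reference model.

```python
import torch
import torch.nn as nn

class U2GNNAggregation(nn.Module):
    """Sketch of one U2GNN-style aggregation step: transformer
    self-attention over a node and its sampled neighbors, followed
    by a recurrent (GRU) transition that updates the node vector.
    Illustrative only; see the authors' repository for the real model."""

    def __init__(self, dim: int, num_heads: int = 1):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gru = nn.GRUCell(dim, dim)  # recurrent transition (our assumption)

    def forward(self, node_vec: torch.Tensor, neighbor_vecs: torch.Tensor) -> torch.Tensor:
        # node_vec: (batch, dim); neighbor_vecs: (batch, k, dim) sampled neighbors
        tokens = torch.cat([node_vec.unsqueeze(1), neighbor_vecs], dim=1)  # (batch, k+1, dim)
        attended, _ = self.attn(tokens, tokens, tokens)  # self-attention over the set
        pooled = attended.mean(dim=1)                    # pool attended vectors (assumption)
        return self.gru(pooled, node_vec)                # updated node representation

# Toy usage: 4 nodes, 8 sampled neighbors each, 16-dimensional features.
layer = U2GNNAggregation(dim=16)
nodes = torch.randn(4, 16)
neighbors = torch.randn(4, 8, 16)
updated = layer(nodes, neighbors)  # shape: (4, 16)
```

A graph-level embedding for classification can then be obtained by pooling the updated node vectors over each graph (e.g., a sum over nodes) and feeding the result to a classifier.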


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Graph Classification | COLLAB | U2GNN (Unsupervised) | Accuracy | 95.62% | #1 |
| Graph Classification | COLLAB | U2GNN | Accuracy | 77.84% | #19 |
| Graph Classification | D&D | U2GNN | Accuracy | 80.23% | #13 |
| Graph Classification | D&D | U2GNN (Unsupervised) | Accuracy | 95.67% | #1 |
| Graph Classification | IMDb-B | U2GNN (Unsupervised) | Accuracy | 96.41% | #1 |
| Graph Classification | IMDb-B | U2GNN | Accuracy | 77.04% | #8 |
| Graph Classification | IMDb-M | U2GNN | Accuracy | 53.60% | #9 |
| Graph Classification | IMDb-M | U2GNN (Unsupervised) | Accuracy | 89.20% | #1 |
| Graph Classification | MUTAG | U2GNN | Accuracy | 89.97% | #24 |
| Graph Classification | MUTAG | U2GNN (Unsupervised) | Accuracy | 88.47% | #34 |
| Graph Classification | PROTEINS | U2GNN | Accuracy | 78.53% | #15 |
| Graph Classification | PROTEINS | U2GNN (Unsupervised) | Accuracy | 80.01% | #9 |
| Graph Classification | PTC | U2GNN | Accuracy | 69.63% | #13 |
| Graph Classification | PTC | U2GNN (Unsupervised) | Accuracy | 91.81% | #1 |
