Universal Graph Transformer Self-Attention Networks

The transformer self-attention network has been extensively used in research domains such as computer vision, image processing, and natural language processing. However, it has not been actively used in graph neural networks (GNNs), where constructing an advanced aggregation function is essential. To this end, we present U2GNN, an effective GNN model that leverages a transformer self-attention mechanism followed by a recurrent transition to induce a powerful aggregation function for learning graph representations. Experimental results show that the proposed U2GNN achieves state-of-the-art accuracies on well-known benchmark datasets for graph classification. Our code is available at: https://github.com/daiquocnguyen/Graph-Transformer
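
To make the described aggregation step concrete, here is a minimal PyTorch sketch of one such layer: self-attention over a node together with its sampled neighbors, followed by a recurrent transition. This is an illustrative assumption of the idea, not the authors' implementation (see the repository above for that); the class name, the choice of nn.GRUCell as the recurrent transition, and all shapes and hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

class SelfAttentionAggregation(nn.Module):
    """Sketch of a U2GNN-style aggregation step: transformer self-attention
    over {node} ∪ sampled neighbors, then a recurrent transition
    (nn.GRUCell used here as an assumed stand-in)."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.transition = nn.GRUCell(dim, dim)

    def forward(self, node_feats: torch.Tensor, neigh_feats: torch.Tensor) -> torch.Tensor:
        # node_feats: (batch, dim); neigh_feats: (batch, k, dim)
        query = node_feats.unsqueeze(1)                # (batch, 1, dim)
        keys = torch.cat([query, neigh_feats], dim=1)  # (batch, 1 + k, dim)
        # Each node attends over itself and its neighbors.
        attended, _ = self.attn(query, keys, keys)     # (batch, 1, dim)
        # Recurrent transition mixes the attended context into the node state.
        return self.transition(attended.squeeze(1), node_feats)

# Usage with random features: 32 nodes, 8 sampled neighbors each, 64-dim states.
layer = SelfAttentionAggregation(dim=64)
nodes = torch.randn(32, 64)
neighbors = torch.randn(32, 8, 64)
updated = layer(nodes, neighbors)  # (32, 64)
```

In such a design, stacking this layer (or iterating it for several steps) and pooling the resulting node states would yield a graph-level representation for classification.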
