GTNet: A Tree-Based Deep Graph Learning Architecture

27 Apr 2022 · Nan Wu, Chaofan Wang

We propose Graph Tree Networks (GTNets), a deep graph learning architecture with a new general message passing scheme that originates from the tree representation of graphs. In the tree representation, messages propagate upward from the leaf nodes to the root node, and each node preserves its initial information before receiving information from its child nodes (neighbors). Following this message passing pattern, we formulate a general propagation rule that updates a node's feature by aggregating its initial feature with its neighbor nodes' updated features. Two graph representation learning models are proposed within the GTNet architecture, the Graph Tree Attention Network (GTAN) and the Graph Tree Convolution Network (GTCN), with experimentally demonstrated state-of-the-art performance on several popular benchmark datasets. Unlike the vanilla Graph Attention Network (GAT) and Graph Convolution Network (GCN), which suffer from the "over-smoothing" issue, the proposed GTAN and GTCN models can go deep, as demonstrated by comprehensive experiments and rigorous theoretical analysis.
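The core idea of the propagation rule described above can be illustrated with a minimal PyTorch sketch: at every layer, a node combines its initial (layer-0) feature with an aggregation of its neighbors' features from the previous layer. This is an assumption-based illustration of the general scheme only; the class and parameter names (GraphTreeLayer, GraphTreeNet, adj_norm, etc.) are hypothetical, and the specific attention/convolution forms used in GTAN and GTCN are not reproduced here.

import torch
import torch.nn as nn

class GraphTreeLayer(nn.Module):
    """One illustrative propagation step: combine each node's initial feature
    with the aggregated, previously updated features of its neighbors."""
    def __init__(self, dim):
        super().__init__()
        self.neighbor_proj = nn.Linear(dim, dim)
        self.self_proj = nn.Linear(dim, dim)

    def forward(self, h_prev, h_init, adj_norm):
        # adj_norm: (N, N) row-normalized adjacency matrix
        # h_prev:   neighbors' features from the previous step, shape (N, d)
        # h_init:   the nodes' initial (layer-0) features, shape (N, d)
        neighbor_msg = adj_norm @ self.neighbor_proj(h_prev)
        return torch.relu(self.self_proj(h_init) + neighbor_msg)

class GraphTreeNet(nn.Module):
    """Stacks K propagation steps; every step re-injects the initial feature,
    mirroring the tree-style message passing described in the abstract."""
    def __init__(self, dim, num_layers):
        super().__init__()
        self.layers = nn.ModuleList(GraphTreeLayer(dim) for _ in range(num_layers))

    def forward(self, x, adj_norm):
        h_init = x
        h = x
        for layer in self.layers:
            h = layer(h, h_init, adj_norm)
        return h

# Toy usage: 5 nodes with 8-dimensional features and a dense normalized adjacency.
if __name__ == "__main__":
    N, d = 5, 8
    x = torch.randn(N, d)
    adj = torch.rand(N, N)
    adj_norm = adj / adj.sum(dim=1, keepdim=True)  # simple row normalization
    model = GraphTreeNet(dim=d, num_layers=4)
    out = model(x, adj_norm)
    print(out.shape)  # torch.Size([5, 8])

Because the initial feature is re-injected at every step rather than being progressively smoothed away, stacking many such layers does not collapse node representations in the way it does for vanilla GCN/GAT, which is the intuition behind the depth results reported in the paper.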


Datasets


Task: Node Property Prediction   Dataset: ogbn-arxiv

Model   Metric                 Value              Global Rank
GTAN    Test Accuracy          0.7297 ± 0.0017    #49
GTAN    Validation Accuracy    0.7384 ± 0.0007    #53
GTAN    Number of params       39208              #73
GTAN    Ext. data              No                 #1
GTCN    Test Accuracy          0.7225 ± 0.0017    #57
GTCN    Validation Accuracy    0.7320 ± 0.0005    #64
GTCN    Number of params       109096             #68
GTCN    Ext. data              No                 #1
