Multi-Task Graph Autoencoders

7 Nov 2018 · Phi Vu Tran

We examine two fundamental tasks associated with graph representation learning: link prediction and node classification. We present a new autoencoder architecture capable of learning a joint representation of local graph structure and available node features for the simultaneous multi-task learning of unsupervised link prediction and semi-supervised node classification. Our simple yet effective and versatile model is efficiently trained end-to-end in a single stage, whereas previous related deep graph embedding methods require multiple training steps that are difficult to optimize. We provide an empirical evaluation of our model on five benchmark relational, graph-structured datasets and demonstrate significant improvement over three strong baselines for graph representation learning. Reference code and data are available at https://github.com/vuptran/graph-representation-learning.
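For intuition, here is a minimal PyTorch sketch of the multi-task idea the abstract describes: a shared encoder over each node's adjacency row concatenated with its features, one decoder that reconstructs the adjacency row (unsupervised link prediction), and one softmax head trained only on labeled nodes (semi-supervised node classification). This is an illustration only, not the authors' released implementation (see the repository above); the class and function names, layer sizes, and equal loss weighting are assumptions, and details of the paper's model such as parameter sharing and reconstruction masking are omitted.

```python
# Minimal sketch of a multi-task graph autoencoder (hypothetical names,
# not the authors' reference implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MTGAESketch(nn.Module):
    """Shared encoder over [adjacency row || node features] with two heads:
    adjacency reconstruction (link prediction) and a node classifier."""
    def __init__(self, n_nodes, n_feats, n_classes, hidden=256, latent=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_nodes + n_feats, hidden), nn.ReLU(),
            nn.Linear(hidden, latent), nn.ReLU(),
        )
        self.adj_decoder = nn.Sequential(
            nn.Linear(latent, hidden), nn.ReLU(),
            nn.Linear(hidden, n_nodes),  # logits over potential edges
        )
        self.classifier = nn.Linear(latent, n_classes)

    def forward(self, adj_rows, feats):
        z = self.encoder(torch.cat([adj_rows, feats], dim=1))
        return self.adj_decoder(z), self.classifier(z)

def multitask_loss(adj_logits, cls_logits, adj_rows, labels, label_mask):
    # Unsupervised branch: reconstruct each node's adjacency row.
    link_loss = F.binary_cross_entropy_with_logits(adj_logits, adj_rows)
    # Semi-supervised branch: cross-entropy only on labeled nodes.
    if label_mask.any():
        cls_loss = F.cross_entropy(cls_logits[label_mask], labels[label_mask])
    else:
        cls_loss = adj_logits.new_zeros(())
    # Equal weighting of the two losses is an assumption of this sketch.
    return link_loss + cls_loss
```

Because the two losses are summed and backpropagated together, the whole model trains end-to-end in a single stage, which is the property the abstract emphasizes over prior multi-step deep graph embedding methods.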


Results from the Paper


Ranked #1 on Link Prediction on Pubmed (Accuracy metric).
Task                 Dataset   Model  Metric    Value   Global Rank
Link Prediction      Citeseer  MTGAE  Accuracy  94.90%  #1
Node Classification  Citeseer  MTGAE  Accuracy  71.80%  #49
Link Prediction      Cora      MTGAE  Accuracy  94.60%  #1
Node Classification  Cora      MTGAE  Accuracy  79.00%  #67
Link Prediction      Pubmed    MTGAE  Accuracy  94.40%  #1
Node Classification  Pubmed    MTGAE  Accuracy  80.40%  #32

All benchmark entries use a validation set. The Pubmed node classification result uses the standard training split of 20 labeled nodes per class.
