Mutual Information Maximization in Graph Neural Networks

21 May 2019 · Xinhan Di, Pengqian Yu, Rui Bu, Mingchao Sun

A variety of graph neural network (GNN) frameworks for representation learning on graphs have recently been developed. These frameworks rely on an aggregation-and-iteration scheme to learn node representations. However, information between nodes is inevitably lost during this scheme. To reduce this loss, we extend GNN frameworks by analyzing the aggregation-and-iteration scheme through the lens of mutual information. We propose a new approach that enlarges the normal neighborhood used in GNN aggregation, with the aim of maximizing mutual information. Through a series of experiments on several benchmark datasets, we show that the proposed approach improves the state-of-the-art performance on four types of graph tasks: supervised and semi-supervised graph classification, graph link prediction, and graph edge generation and classification.
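The core idea in the abstract, aggregating over an enlarged neighborhood rather than only immediate neighbors, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, the k-hop definition of "enlarged neighborhood", and the mean aggregator are all assumptions made for clarity.

```python
import numpy as np

def enlarged_neighborhood_aggregate(A, X, hops=2):
    """Mean-aggregate node features over an enlarged k-hop neighborhood.

    A: (n, n) binary adjacency matrix; X: (n, d) node features.
    The enlarged neighborhood of a node is taken here to be every node
    reachable within `hops` steps, plus the node itself (an assumption,
    not necessarily the paper's exact definition).
    """
    n = A.shape[0]
    reach = np.eye(n, dtype=bool)     # each node reaches itself
    frontier = np.eye(n, dtype=bool)
    for _ in range(hops):
        frontier = (frontier @ A) > 0  # expand reachability by one hop
        reach |= frontier
    R = reach.astype(float)
    # row-normalize so each node averages over its enlarged neighborhood
    return R @ X / R.sum(axis=1, keepdims=True)
```

On a path graph 0–1–2–3, setting `hops=1` averages node 0 over {0, 1}, while `hops=2` enlarges the neighborhood to {0, 1, 2}, letting more distant information reach the node in a single aggregation step.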


Results from the Paper

| Task                 | Dataset  | Model         | Metric   | Value  | Global Rank |
|----------------------|----------|---------------|----------|--------|-------------|
| Graph Classification | 20NEWS   | sKNN-LDS      | Accuracy | 47.9   | # 1         |
| Graph Classification | Cancer   | sKNN-LDS      | Accuracy | 95.7   | # 1         |
| Graph Classification | Citeseer | sKNN-LDS      | Accuracy | 73.7   | # 1         |
| Graph Classification | COLLAB   | sGIN          | Accuracy | 80.71% | # 10        |
| Graph Classification | Cora     | sKNN-LDS      | Accuracy | 72.3   | # 1         |
| Graph Classification | Digits   | sKNN-LDS      | Accuracy | 92.5   | # 1         |
| Graph Classification | IMDb-B   | sGIN          | Accuracy | 77.94% | # 6         |
| Graph Classification | IMDb-M   | sGIN          | Accuracy | 54.52% | # 7         |
| Graph Classification | MUTAG    | sGIN          | Accuracy | 94.14% | # 8         |
| Graph Classification | NCI1     | sGIN          | Accuracy | 83.85% | # 16        |
| Graph Classification | PROTEINS | sGIN          | Accuracy | 78.97% | # 10        |
| Graph Classification | PTC      | sGIN          | Accuracy | 73.56% | # 5         |
| Link Prediction      | Pubmed   | sGraphite-VAE | AUC      | 94.8%  | # 7         |
| Link Prediction      | Pubmed   | sGraphite-VAE | AP       | 96.3%  | # 6         |
| Graph Classification | Wine     | sKNN-LDS      | Accuracy | 98     | # 1         |
