Strategies for Pre-training Graph Neural Networks

Many applications of machine learning require a model to make accurate predictions on test examples that are distributionally different from training ones, while task-specific labels are scarce during training. An effective approach to this challenge is to pre-train a model on related tasks where data is abundant, and then fine-tune it on a downstream task of interest. While pre-training has been effective in many language and vision domains, it remains an open question how to effectively use pre-training on graph datasets. In this paper, we develop a new strategy and self-supervised methods for pre-training Graph Neural Networks (GNNs). The key to the success of our strategy is to pre-train an expressive GNN at the level of individual nodes as well as entire graphs, so that the GNN can learn useful local and global representations simultaneously. We systematically study pre-training on multiple graph classification datasets. We find that naive strategies, which pre-train GNNs at the level of either entire graphs or individual nodes, give limited improvement and can even lead to negative transfer on many downstream tasks. In contrast, our strategy avoids negative transfer and improves generalization significantly across downstream tasks, leading to up to 9.4% absolute improvement in ROC-AUC over non-pre-trained models and achieving state-of-the-art performance for molecular property prediction and protein function prediction.
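The abstract's key idea, combining a node-level self-supervised objective with a graph-level objective so that both losses shape the same GNN, can be illustrated with a minimal numpy sketch. Everything here is a simplified stand-in, not the paper's actual architecture: one mean-aggregation message-passing layer, an attribute-reconstruction loss as a proxy for the paper's node-level tasks (attribute masking / context prediction), and a logistic graph-property head on mean-pooled embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 5 nodes, symmetric adjacency with self-loops, 4-dim features
A = np.array([[1, 1, 0, 0, 1],
              [1, 1, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 1, 1, 1],
              [1, 0, 0, 1, 1]], dtype=float)
X = rng.normal(size=(5, 4))

# One round of mean-aggregation message passing (a minimal GNN layer)
W = rng.normal(size=(4, 4))
deg = A.sum(axis=1, keepdims=True)
H = np.tanh((A @ X / deg) @ W)          # node embeddings, shape (5, 4)

# Node-level self-supervised loss: reconstruct node attributes from
# embeddings -- a toy proxy for attribute masking / context prediction.
W_node = rng.normal(size=(4, 4))
node_loss = np.mean((H @ W_node - X) ** 2)

# Graph-level loss: mean-pool node embeddings into a graph embedding,
# then a binary property head (logistic loss for a toy label y = 1).
w_graph = rng.normal(size=4)
g = H.mean(axis=0)                      # graph embedding
logit = g @ w_graph
graph_loss = np.log1p(np.exp(-logit))   # -log sigmoid(logit) for y = 1

# The strategy's point: optimize both levels jointly so the GNN learns
# useful local *and* global representations at once.
total_loss = node_loss + graph_loss
print(float(total_loss))
```

In the paper the two stages are applied in sequence (node-level self-supervised pre-training, then graph-level supervised pre-training, then fine-tuning); the single combined loss above is only meant to show how both levels attach to the same node embeddings.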

ICLR 2020
| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Drug Discovery | BACE | ContextPred | AUC | 0.845 | #5 |
| Molecular Property Prediction | BACE | PretrainGNN | ROC-AUC | 84.5 | #4 |
| Drug Discovery | BBBP | ContextPred | AUC | 0.687 | #4 |
| Molecular Property Prediction | BBBP | PretrainGNN | ROC-AUC | 68.7 | #12 |
| Drug Discovery | ClinTox | ContextPred | AUC | 0.726 | #3 |
| Molecular Property Prediction | ClinTox | PretrainGNN | ROC-AUC | 72.6 | #14 |
| Molecular Property Prediction | FreeSolv | PretrainGNN | RMSE | 2.764 | #8 |
| Drug Discovery | HIV dataset | ContextPred | AUC | 0.799 | #4 |
| Molecular Property Prediction | Lipophilicity | PretrainGNN | RMSE | 0.739 | #5 |
| Drug Discovery | MUV | ContextPred | AUC | 0.813 | #4 |
| Molecular Property Prediction | QM8 | PretrainGNN | MAE | 0.0200 | #4 |
| Molecular Property Prediction | QM9 | PretrainGNN | MAE | 0.00922 | #4 |
| Drug Discovery | SIDER | ContextPred | AUC | 0.627 | #4 |
| Molecular Property Prediction | SIDER | PretrainGNN | ROC-AUC | 62.7 | #12 |
| Drug Discovery | Tox21 | ContextPred | AUC | 0.781 | #9 |
| Molecular Property Prediction | Tox21 | PretrainGNN | ROC-AUC | 78.1 | #5 |
| Drug Discovery | ToxCast | ContextPred | AUC | 0.657 | #5 |
| Molecular Property Prediction | ToxCast | PretrainGNN | ROC-AUC | 65.7 | #3 |

Results from Other Papers


| Task | Dataset | Model | Metric | Value | Rank |
|------|---------|-------|--------|-------|------|
| Molecular Property Prediction | QM7 | PretrainGNN | MAE | 113.2 | #8 |

Methods


No methods listed for this paper.