ChemBERTa-2: Towards Chemical Foundation Models

5 Sep 2022 · Walid Ahmad, Elana Simon, Seyone Chithrananda, Gabriel Grand, Bharath Ramsundar

Large pretrained models such as GPT-3 have had tremendous impact on modern natural language processing by leveraging self-supervised learning to learn salient representations that can be used to readily finetune on a wide variety of downstream tasks. We investigate the possibility of transferring such advances to molecular machine learning by building a chemical foundation model, ChemBERTa-2, using the language of SMILES. While labeled data for molecular prediction tasks is typically scarce, libraries of SMILES strings are readily available. In this work, we build upon ChemBERTa by optimizing the pretraining process. We compare multi-task and self-supervised pretraining by varying hyperparameters and pretraining dataset size, up to 77M compounds from PubChem. To our knowledge, the 77M set constitutes one of the largest datasets used for molecular pretraining to date. We find that with these pretraining improvements, we are competitive with existing state-of-the-art architectures on the MoleculeNet benchmark suite. We analyze the degree to which improvements in pretraining translate to improvement on downstream tasks.
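As a rough illustration of the workflow the abstract describes (pretrain on SMILES, then reuse the encoder for downstream molecular tasks), the sketch below loads a pretrained ChemBERTa-2 checkpoint and embeds a single SMILES string. The Hugging Face Hub checkpoint name, the example molecule, and the mean-pooling step are assumptions for illustration, not details prescribed by the paper.

```python
# Minimal sketch: embed a SMILES string with a pretrained ChemBERTa-2 encoder.
# The checkpoint ID below is assumed (a DeepChem-published variant on the
# Hugging Face Hub); substitute whichever MLM or MTR variant you intend to use.
import torch
from transformers import AutoModel, AutoTokenizer

checkpoint = "DeepChem/ChemBERTa-77M-MTR"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)
model.eval()

smiles = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin, used here only as an example input
inputs = tokenizer(smiles, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into one molecule-level vector; this is one
# simple pooling choice, not the paper's mandated readout.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # (batch, hidden_size)
```

The resulting vector can then be fed to a lightweight head (or the full encoder fine-tuned end to end) on a labeled MoleculeNet task.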

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Molecular Property Prediction | BACE | ChemBERTa-2 (MTR-77M) | ROC-AUC | 79.9 | # 9 |
| Molecular Property Prediction | BACE | ChemBERTa-2 (MTR-77M) | RMSE | 1.363 | # 2 |
| Molecular Property Prediction | BBBP | ChemBERTa-2 (MTR-77M) | ROC-AUC | 72.8 | # 5 |
| Molecular Property Prediction | Clearance | ChemBERTa-2 (MTR-77M) | RMSE | 48.515 | # 2 |
| Molecular Property Prediction | ClinTox | ChemBERTa-2 (MTR-77M) | ROC-AUC | 56.3 | # 16 |
| Molecular Property Prediction | ClinTox | ChemBERTa-2 (MTR-77M) | Molecules (M) | 77 | # 1 |
| Molecular Property Prediction | ESOL | ChemBERTa-2 (MTR-77M) | RMSE | 0.889 | # 4 |
| Molecular Property Prediction | Lipophilicity | ChemBERTa-2 (MTR-77M) | RMSE | 0.798 | # 6 |
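For reference, the two metric types in the table are the standard MoleculeNet scores: ROC-AUC for the classification benchmarks (BACE, BBBP, ClinTox) and RMSE for the regression benchmarks (Clearance, ESOL, Lipophilicity). The sketch below shows how they are typically computed with scikit-learn; the label and prediction arrays are placeholder values, not model outputs from the paper.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, roc_auc_score

# Classification benchmarks are scored with ROC-AUC over predicted probabilities.
y_true_cls = np.array([0, 1, 1, 0, 1])             # placeholder binary labels
y_score_cls = np.array([0.2, 0.7, 0.9, 0.4, 0.6])  # placeholder probabilities
print("ROC-AUC:", roc_auc_score(y_true_cls, y_score_cls))

# Regression benchmarks are scored with root-mean-squared error.
y_true_reg = np.array([1.2, -0.4, 2.1])  # placeholder continuous targets
y_pred_reg = np.array([1.0, -0.1, 2.4])  # placeholder predictions
print("RMSE:", np.sqrt(mean_squared_error(y_true_reg, y_pred_reg)))
```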
