NeurIPS 2023 • Filip Ekström Kelvinius, Dimitar Georgiev, Artur Petrov Toshev, Johannes Gasteiger
In this paper, we explore the utility of knowledge distillation (KD) for accelerating molecular GNNs.
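As background, a minimal sketch of classic logit-based knowledge distillation (Hinton-style soft targets) is shown below; this is a generic illustration in NumPy, not the paper's specific method for molecular GNNs, and all names (`kd_loss`, the temperature `T`) are assumptions for this example.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T yields softer distributions.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened outputs,
    # scaled by T^2 as in standard soft-target distillation.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    eps = 1e-12  # avoid log(0)
    return float(T * T * np.sum(p * (np.log(p + eps) - np.log(q + eps))))
```

The student is trained to match the teacher's softened output distribution in addition to the ground-truth labels; for regression-style molecular GNN targets, distillation typically matches outputs or intermediate features directly instead.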