Graph-level Representation Learning with Joint-Embedding Predictive Architectures

27 Sep 2023 · Geri Skenderi, Hang Li, Jiliang Tang, Marco Cristani

Joint-Embedding Predictive Architectures (JEPAs) have recently emerged as a novel and powerful technique for self-supervised representation learning. They aim to learn an energy-based model by predicting the latent representation of a target signal $y$ from a context signal $x$. JEPAs bypass the need for data augmentation and negative samples, which are typically required by contrastive learning, while avoiding the overfitting issues associated with generative pretraining. In this paper, we show that graph-level representations can be effectively modeled using this paradigm and propose Graph-JEPA, the first JEPA for the graph domain. In particular, we employ masked modeling to learn embeddings for different subgraphs of the input graph. To endow the representations with the implicit hierarchy that is often present in graph-level concepts, we devise an alternative training objective that consists of predicting the coordinates of the encoded subgraphs on the unit hyperbola in the 2D plane. Extensive validation shows that Graph-JEPA can learn representations that are expressive and competitive in both graph classification and regression problems.
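To make the training objective concrete, below is a minimal, illustrative PyTorch sketch of a Graph-JEPA-style step. It is a sketch under stated assumptions, not the authors' implementation: subgraph extraction and pooling are assumed to have already produced fixed-size vectors, stand-in MLPs replace the paper's actual encoders and predictor, and helper names such as `hyperbola_coords` and `to_scalar` are hypothetical.

```python
# Minimal Graph-JEPA-style training step (illustrative sketch only).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    """Stand-in encoder/predictor; the paper's actual modules would go here."""
    def __init__(self, dim_in, dim_out, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, hidden), nn.ReLU(), nn.Linear(hidden, dim_out)
        )

    def forward(self, x):
        return self.net(x)

def hyperbola_coords(s):
    """Map a scalar s to the point (cosh s, sinh s) on the unit hyperbola x^2 - y^2 = 1."""
    return torch.stack([torch.cosh(s), torch.sinh(s)], dim=-1)

dim = 64
context_enc = MLP(dim, dim)               # encodes the visible (context) subgraph
target_enc = copy.deepcopy(context_enc)   # target branch: frozen, EMA-updated
to_scalar = nn.Linear(dim, 1)             # hypothetical head: embedding -> scalar s
for p in list(target_enc.parameters()) + list(to_scalar.parameters()):
    p.requires_grad_(False)

predictor = MLP(dim, 2)                   # predicts 2D hyperbola coordinates
# (a full implementation would also condition the predictor on a positional
#  encoding identifying which masked subgraph to predict)

# Toy batch: pooled features for one context and one masked target subgraph per graph.
ctx = torch.randn(8, dim)
tgt = torch.randn(8, dim)

pred_xy = predictor(context_enc(ctx))     # predicted coordinates of the target patch
with torch.no_grad():                     # stop-gradient on the target branch
    s = to_scalar(target_enc(tgt)).squeeze(-1)
    tgt_xy = hyperbola_coords(s)          # target coordinates on the hyperbola

loss = F.smooth_l1_loss(pred_xy, tgt_xy)  # regress predicted onto target coordinates
loss.backward()

# EMA update of the target branch (momentum m), as is standard in JEPAs.
m = 0.99
with torch.no_grad():
    for pt, pc in zip(target_enc.parameters(), context_enc.parameters()):
        pt.mul_(m).add_((1 - m) * pc)
```

The design choice mirrored here is that the prediction target is not the target embedding itself but its coordinates on the unit hyperbola $x^2 - y^2 = 1$, matching the alternative, hierarchy-aware objective described in the abstract; the stop-gradient and EMA-updated target branch are standard JEPA ingredients.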


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Graph Classification | D&D | Graph-JEPA | Accuracy | 78.64% | #21 |
| Graph Classification | IMDb-B | Graph-JEPA | Accuracy | 73.68% | #19 |
| Graph Classification | IMDb-M | Graph-JEPA | Accuracy | 50.69% | #18 |
| Graph Classification | MUTAG | Graph-JEPA | Accuracy | 91.25% | #14 |
| Graph Classification | PROTEINS | Graph-JEPA | Accuracy | 75.67% | #51 |
| Graph Classification | REDDIT-B | Graph-JEPA | Accuracy | 56.73% | #11 |
| Graph Regression | ZINC | Graph-JEPA | MAE | 0.434 | #21 |
