Recent research on graph neural networks has made significant advances in learning representations for classification and regression on graphs. However, time series prediction on graphs is still limited to attribute changes, link prediction, or graph generation. In this paper, we connect graph neural networks to the long-standing research tradition of graph edits. Graph edits are expressive enough to model any graph change and are typically highly sparse, which facilitates interpretation and reduces the time complexity from quadratic to linear. We propose to augment graph neural networks with a simple linear output layer, which we call the graph edit network, to predict graph edits. Our key contribution is a proof that graph edit networks are expressive enough to edit any graph into any other using almost as few edits as possible, i.e., we prove that graph edit networks can approximate the NP-hard graph edit distance. With this result, we hope to provide a firm theoretical basis for a next generation of time series prediction models. We further provide an experimental proof of concept by verifying that graph neural networks with our output layer can learn a variety of graph dynamical systems, which are difficult to learn for baselines in the literature.
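To make the graph-edit formalism concrete, the following is a minimal sketch (not the authors' code) of how a graph change can be expressed as a sparse list of edit operations rather than a dense difference of adjacency matrices; the function name and the exact edit vocabulary are illustrative assumptions.

```python
def apply_edits(nodes, edges, edits):
    """Apply a sequence of graph edits to a graph (nodes, edges).

    nodes: set of node ids; edges: set of (u, v) tuples.
    Illustrative edit vocabulary:
        ('add_node', u), ('del_node', u),
        ('add_edge', u, v), ('del_edge', u, v)
    """
    nodes, edges = set(nodes), set(edges)
    for op in edits:
        kind = op[0]
        if kind == 'add_node':
            nodes.add(op[1])
        elif kind == 'del_node':
            nodes.discard(op[1])
            # deleting a node also removes its incident edges
            edges = {(u, v) for (u, v) in edges if op[1] not in (u, v)}
        elif kind == 'add_edge':
            edges.add((op[1], op[2]))
        elif kind == 'del_edge':
            edges.discard((op[1], op[2]))
        else:
            raise ValueError(f"unknown edit: {kind}")
    return nodes, edges
```

For example, three edits suffice to turn a triangle into a path of four nodes, whereas a dense representation of the same change would touch every entry of the adjacency matrix; this sparsity is what reduces the cost of predicting a change from quadratic to linear in the graph size.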
