On the Effect of Input Perturbations for Graph Neural Networks

29 Sep 2021 · Behrooz Tahmasebi, Stefanie Jegelka

The expressive power of a message passing graph neural network (MPGNN) depends on its architecture and its input node attributes. In this work, we study how this interplay is affected by input perturbations. First, perturbations of node attributes may act as noise and hinder predictive power. But perturbations can also aid expressiveness by making nodes more identifiable; recent works show that unique node IDs are necessary for MPGNNs to represent certain functions. Our results relate properties of the noise, the smoothness of the model, and the geometry of the input graphs and the task. In particular, we take the perspective of lower bounding the smoothness needed for discrimination: how much output variation is required to exploit random node IDs, or to retain discriminability? Our theoretical results imply constraints on the model for it to exploit random node IDs and, conversely, give insight into how much perturbation of node attributes a given model class can tolerate while retaining discrimination.
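To make the two roles of perturbations concrete, here is a minimal sketch in plain PyTorch of the two operations the abstract contrasts: appending fresh Gaussian "random node IDs" to node attributes, which can break symmetry between otherwise indistinguishable nodes, versus additively perturbing the attributes themselves, which acts as noise. The function and class names, dimensions, and the Gaussian choice are illustrative assumptions, not the paper's construction.

```python
import torch

def add_random_node_ids(x: torch.Tensor, id_dim: int = 8, sigma: float = 1.0) -> torch.Tensor:
    """Append i.i.d. Gaussian 'random node IDs' to node attributes.

    x: [num_nodes, num_features] node attribute matrix.
    Returns: [num_nodes, num_features + id_dim].
    """
    ids = sigma * torch.randn(x.size(0), id_dim, device=x.device)  # fresh IDs each call
    return torch.cat([x, ids], dim=1)

def perturb_node_attributes(x: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    """Additive Gaussian perturbation of the node attributes themselves (noise)."""
    return x + sigma * torch.randn_like(x)

class MessagePassingLayer(torch.nn.Module):
    """One generic message-passing step with sum aggregation over neighbors."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin_self = torch.nn.Linear(in_dim, out_dim)
        self.lin_neigh = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: [n, n] dense adjacency matrix; adj @ x sums neighbor features
        return torch.relu(self.lin_self(x) + self.lin_neigh(adj @ x))

# Usage sketch on a hypothetical 3-node path graph with identical attributes,
# where nodes 0 and 2 are indistinguishable to an MPGNN without IDs:
x = torch.ones(3, 4)
adj = torch.tensor([[0., 1., 0.],
                    [1., 0., 1.],
                    [0., 1., 0.]])
layer = MessagePassingLayer(4 + 8, 16)
out = layer(add_random_node_ids(x, id_dim=8), adj)  # random IDs break the symmetry
```

Whether a model can actually benefit from such IDs, or instead treats them as harmful noise, is exactly the smoothness trade-off the paper lower-bounds: a very smooth (low output variation) model averages the IDs away, while exploiting them requires sufficient output variation.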
