Incorporating Heterophily into Graph Neural Networks for Graph Classification

15 Mar 2022 · Wei Ye, Jiayi Yang, Sourav Medya, Ambuj Singh

Graph neural networks (GNNs) often assume strong homophily in graphs and seldom consider heterophily, in which connected nodes tend to have different class labels and dissimilar features. In real-world scenarios, graphs may contain nodes that exhibit both homophily and heterophily, and failing to generalize to this setting makes many GNNs underperform in graph classification. In this paper, we address this limitation by identifying two useful designs and developing a novel GNN architecture called IHGNN (Incorporating Heterophily into Graph Neural Networks). The two designs are: (1) the integration and separation of the ego- and neighbor-embeddings of nodes, and (2) the concatenation of all node embeddings as the final graph-level readout function. In the first design, integration is combined with separation through an injective function, namely the composition of an MLP with the concatenation function. The second design enables the graph-level readout function to differentiate between different node embeddings. Because the functions used in both designs are injective, IHGNN, while simple, is as expressive as the 1-WL test. We empirically validate IHGNN on various graph datasets and demonstrate that it achieves state-of-the-art performance on the graph classification task.
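
The following PyTorch sketch illustrates the two designs as described in the abstract only; it is not the paper's exact formulation. The class names (IHGNNLayerSketch, IHGNNSketch), the use of sum aggregation over neighbors, the dense adjacency input, and the sum-pooling over nodes after per-node concatenation across layers are all assumptions made for illustration.

    import torch
    import torch.nn as nn

    class IHGNNLayerSketch(nn.Module):
        # One message-passing layer: keeps the ego- and neighbor-embeddings both
        # separated (via concatenation) and integrated (via elementwise sum),
        # then applies an MLP to the concatenation (design 1, as sketched here).
        def __init__(self, in_dim, out_dim):
            super().__init__()
            # MLP applied to [ego || neighbor-sum || ego + neighbor-sum]
            self.mlp = nn.Sequential(
                nn.Linear(3 * in_dim, out_dim),
                nn.ReLU(),
                nn.Linear(out_dim, out_dim),
            )

        def forward(self, h, adj):
            # h: (num_nodes, in_dim); adj: dense (num_nodes, num_nodes) adjacency
            neigh = adj @ h                                       # sum of neighbor embeddings
            combined = torch.cat([h, neigh, h + neigh], dim=-1)   # separation + integration
            return self.mlp(combined)

    class IHGNNSketch(nn.Module):
        def __init__(self, in_dim, hidden_dim, num_classes, num_layers=3):
            super().__init__()
            dims = [in_dim] + [hidden_dim] * num_layers
            self.layers = nn.ModuleList(
                IHGNNLayerSketch(dims[i], dims[i + 1]) for i in range(num_layers)
            )
            # Readout (design 2, as assumed here): concatenate each node's
            # embeddings from all layers, then sum-pool over nodes.
            self.classifier = nn.Linear(in_dim + hidden_dim * num_layers, num_classes)

        def forward(self, x, adj):
            hs = [x]
            h = x
            for layer in self.layers:
                h = layer(h, adj)
                hs.append(h)
            node_repr = torch.cat(hs, dim=-1)   # per-node concatenation across layers
            graph_repr = node_repr.sum(dim=0)   # pool over nodes -> graph embedding
            return self.classifier(graph_repr)

Usage is the standard graph-classification loop: build a dense adjacency and node-feature matrix per graph, run IHGNNSketch to get logits, and train with cross-entropy.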
