Beyond Message Passing Paradigm: Training Graph Data with Consistency Constraints

29 Sep 2021 · Lirong Wu, Stan Z. Li

Recent years have witnessed great success in handling graph-related tasks with Graph Neural Networks (GNNs). However, most existing GNNs rely on message passing to aggregate features among neighbors. Despite their success, three weaknesses limit their capacity to learn from graph data: weak generalization when labeled data is severely limited, poor robustness to label noise and structure perturbation, and the high computation and memory cost of keeping the entire graph in memory. In this paper, we propose a simple yet effective Graph Consistency Learning (GCL) framework, built purely on multilayer perceptrons, in which structure information is incorporated only implicitly, as prior knowledge in the computation of supervision signals, and is never explicitly involved in the forward pass. Specifically, the GCL framework is optimized with three well-designed consistency constraints: neighborhood consistency, label consistency, and class-center consistency. More importantly, we provide a theoretical analysis of the connections between message passing and consistency constraints. Extensive experiments show that GCL achieves encouraging performance, with better generalization and robustness than other leading methods.
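No code implementation has been released for this paper, so, purely as an illustration, below is a minimal PyTorch sketch of how the three constraints described in the abstract might be combined on top of an MLP's class logits. Everything here is an assumption for clarity, not the authors' implementation: the function name `gcl_losses`, the dense row-normalized adjacency, the MSE form of the consistency terms, and the loss weights `lam` are all hypothetical.

```python
# Hypothetical sketch of a GCL-style objective: an MLP produces the
# logits; graph structure enters only through the loss terms below,
# never through the forward pass. Illustrative only -- names, the
# aggregation scheme, and the weights are assumptions.
import torch
import torch.nn.functional as F

def gcl_losses(logits, adj, labels, labeled_mask, lam=(1.0, 1.0, 1.0)):
    """logits: [N, C] MLP outputs; adj: [N, N] row-normalized adjacency
    (used only as a supervision signal); labels: [N] class ids, valid
    where labeled_mask is True; lam: weights for the three terms."""
    probs = F.softmax(logits, dim=1)

    # (1) Neighborhood consistency: each node's prediction should agree
    # with the structure-aware average of its neighbors' predictions.
    neigh = adj @ probs
    l_neigh = F.mse_loss(probs, neigh.detach())

    # (2) Label consistency: standard supervision on the labeled nodes.
    l_label = F.cross_entropy(logits[labeled_mask], labels[labeled_mask])

    # (3) Class-center consistency: pull each node's prediction toward
    # the mean prediction (center) of its predicted class.
    pred = probs.argmax(dim=1)
    centers = torch.stack([
        probs[pred == c].mean(dim=0) if (pred == c).any()
        else probs.new_zeros(probs.size(1))
        for c in range(probs.size(1))
    ])
    l_center = F.mse_loss(probs, centers[pred].detach())

    return lam[0] * l_neigh + lam[1] * l_label + lam[2] * l_center

# Toy usage on a random 4-node graph (illustrative only).
logits = torch.randn(4, 3)
adj = torch.eye(4)  # stands in for a row-normalized adjacency matrix
labels = torch.tensor([0, 1, 2, 0])
mask = torch.tensor([True, True, False, False])
loss = gcl_losses(logits, adj, labels, mask)
```

Because the adjacency matrix appears only inside the loss, inference reduces to a plain MLP forward pass with no graph in memory, which is consistent with the abstract's claim of avoiding the computation and memory burden of keeping the entire graph.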

