Recent works reveal that feature or label smoothing lies at the core of Graph Neural Networks (GNNs).
First, GNNs can capture higher-order structural information by stacking more layers, but they cannot be made arbitrarily deep due to the over-smoothing issue.
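Over-smoothing can be illustrated with a minimal NumPy sketch (an assumption-laden toy example, not a method from this work): repeatedly applying a GCN-style propagation matrix, here the random-walk-normalized adjacency with self-loops on a hypothetical 4-node path graph, drives all node features toward the same vector, so nodes become indistinguishable.

```python
import numpy as np

# Hypothetical 4-node path graph (0-1-2-3); any connected graph shows the effect.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

A_hat = A + np.eye(4)              # add self-loops, as in GCN propagation
P = A_hat / A_hat.sum(axis=1, keepdims=True)  # D^{-1} (A + I), row-stochastic

X = np.random.default_rng(0).normal(size=(4, 2))  # random initial node features

def feature_spread(X):
    """Maximum pairwise distance between node feature vectors."""
    return max(np.linalg.norm(X[i] - X[j])
               for i in range(len(X)) for j in range(len(X)))

spread_before = feature_spread(X)
for _ in range(100):               # 100 propagation steps ~ a very deep stack
    X = P @ X
spread_after = feature_spread(X)

# After many propagations, all rows of X are nearly identical:
# spread_after is orders of magnitude smaller than spread_before.
```

The same collapse occurs (up to degree-dependent scaling) with the symmetric normalization D^{-1/2}(A+I)D^{-1/2}; the row-stochastic form is used here only because its fixed point, identical rows, is easiest to check.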
Graph neural networks (GNNs) have recently achieved state-of-the-art performance in many graph-based applications.
Based on the experimental results, we answer the following two essential questions: (1) what actually leads to the compromised performance of deep GNNs, and (2) when deep GNNs are needed and how to build them.
Unfortunately, many real-world graphs are sparse in terms of both edges and labels, leading to suboptimal GNN performance.