By deconstructing the message passing mechanism, PasCa presents a novel Scalable Graph Neural Architecture Paradigm (SGAP), together with a general architecture design space comprising 150k candidate designs.
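The decoupling behind SGAP can be illustrated with a hedged sketch: features are propagated in a parameter-free pre-processing stage, a standard model is trained on the precomputed features, and predictions are optionally smoothed in post-processing. The stage names, `train_fn` hook, and hyper-parameters below are our illustrative assumptions, not the paper's API.

```python
import numpy as np

def sgap_pipeline(A_hat, X, train_fn, k_pre=2, k_post=1):
    """Illustrative SGAP-style decoupling (our naming, not the paper's API):
    (1) pre-processing: propagate features k_pre steps, no learned parameters;
    (2) training: fit any scalable model on the precomputed features;
    (3) post-processing: propagate the model's predictions k_post steps.
    A_hat: normalized adjacency matrix; X: node features;
    train_fn: takes features, returns a predict(features) callable."""
    for _ in range(k_pre):           # stage 1: graph work done once, up front
        X = A_hat @ X
    predict = train_fn(X)            # stage 2: graph-free, mini-batch friendly
    Y = predict(X)
    for _ in range(k_post):          # stage 3: smooth predictions over the graph
        Y = A_hat @ Y
    return Y
```

Because the graph is only touched in stages 1 and 3, stage 2 can use ordinary mini-batch training, which is what makes the paradigm scalable.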
For example, with our divide-and-conquer technique, the distributed K-core decomposition algorithm scales to a large graph with 136 billion edges without sacrificing correctness.
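For readers unfamiliar with K-core decomposition, a minimal single-machine sketch of the classic peeling algorithm is below; the distributed, divide-and-conquer variant claimed in the text is not reproduced here. A node's core number is the largest k such that the node belongs to a subgraph in which every node has degree at least k.

```python
from collections import defaultdict

def core_numbers(edges):
    """Classic peeling algorithm for K-core decomposition:
    repeatedly remove a minimum-degree node; the running threshold k
    at the moment of removal is that node's core number."""
    adj = defaultdict(set)
    for u, v in edges:
        if u != v:                      # ignore self-loops
            adj[u].add(v)
            adj[v].add(u)
    degree = {u: len(nbrs) for u, nbrs in adj.items()}
    remaining = set(adj)
    core, k = {}, 0
    while remaining:
        u = min(remaining, key=degree.get)   # current minimum-degree node
        k = max(k, degree[u])                # threshold only ever grows
        core[u] = k
        remaining.remove(u)
        for w in adj[u]:                     # peel u: lower neighbors' degrees
            if w in remaining:
                degree[w] -= 1
    return core
```

On a triangle with a pendant node attached, the triangle nodes get core number 2 and the pendant gets 1.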
Embedding models have been an effective learning paradigm for high-dimensional data.
Recent works reveal that feature or label smoothing lies at the core of Graph Neural Networks (GNNs).
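Feature smoothing in this sense can be made concrete with a short sketch: repeatedly multiplying node features by a normalized adjacency matrix averages each node's features with its neighbors'. The symmetric normalization with self-loops used below is one common choice (as in SGC-style propagation), assumed here for illustration.

```python
import numpy as np

def smooth_features(adj, X, k=2):
    """k-step feature smoothing with the symmetrically normalized
    adjacency (self-loops added): X <- D^{-1/2} (A + I) D^{-1/2} X.
    adj: dense 0/1 adjacency matrix; X: (n, d) node features."""
    A = adj + np.eye(adj.shape[0])           # add self-loops
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt      # normalized propagation matrix
    for _ in range(k):
        X = A_hat @ X                        # each step mixes neighbor features
    return X
```

Each propagation step pulls connected nodes' features closer together, which is the smoothing effect the cited works identify at the core of GNNs.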
Graph neural networks (GNNs) have recently achieved state-of-the-art performance in many graph-based applications.
In recent studies, neural message passing has proved to be an effective way to design graph neural networks (GNNs), which have achieved state-of-the-art performance in many graph-based tasks.
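The neural message passing scheme referred to here follows the generic pattern h_v' = UPDATE(h_v, AGGREGATE({h_u : u ∈ N(v)})). A minimal sketch of one such layer is below, using mean aggregation and a linear update with ReLU (a GraphSAGE-style choice, assumed for illustration; the names `W_self` and `W_neigh` are ours).

```python
import numpy as np

def message_passing_layer(adj_list, H, W_self, W_neigh):
    """One round of neural message passing (mean-aggregation sketch):
    each node aggregates the mean of its neighbors' features, then
    applies a learned linear update followed by ReLU.
    adj_list: {node: list of neighbor indices}; H: (n, d) features."""
    M = np.zeros_like(H)
    for v, nbrs in adj_list.items():
        if nbrs:
            M[v] = H[nbrs].mean(axis=0)       # AGGREGATE neighbor messages
    out = H @ W_self + M @ W_neigh            # UPDATE with learned weights
    return np.maximum(out, 0.0)               # ReLU nonlinearity
```

Stacking L such layers lets each node's representation depend on its L-hop neighborhood, which is the mechanism the cited GNNs build on.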