no code implementations • 9 May 2023 • Jingbo Zhou, Yixuan Du, Ruqiong Zhang, Di Jin, Carl Yang, Rui Zhang
Based on this, we propose a sampling-based node-level residual module (SNR) that can achieve a more flexible utilization of different hops of subgraph aggregation by introducing node-level parameters sampled from a learnable distribution.
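The abstract above describes sampling node-level residual coefficients from a learnable distribution to mix different hops of subgraph aggregation. A minimal NumPy sketch of that idea, using the reparameterization trick with a per-node Gaussian; all names and the sigmoid squashing are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def snr_combine(hop_feats, mu, log_sigma, rng=rng):
    """Illustrative sampling-based node-level residual combination.

    hop_feats: list of K arrays, each (N, d) -- node features after k hops.
    mu, log_sigma: (N, K-1) learnable parameters of the per-node Gaussian
    from which the mixing coefficients are sampled (assumed form).
    """
    h = hop_feats[0]
    for k in range(1, len(hop_feats)):
        # reparameterization: z = mu + sigma * eps, eps ~ N(0, 1)
        eps = rng.standard_normal(mu[:, k - 1].shape)
        z = mu[:, k - 1] + np.exp(log_sigma[:, k - 1]) * eps
        # squash to (0, 1) so each node gets a convex residual weight
        z = 1.0 / (1.0 + np.exp(-z))
        h = z[:, None] * h + (1.0 - z[:, None]) * hop_feats[k]
    return h
```

Because the coefficients are per-node rather than per-layer, each node can favor a different receptive-field depth, which is the flexibility the abstract refers to.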
no code implementations • 16 Jan 2023 • Jingbo Zhou, Yixuan Du, Ruqiong Zhang, Rui Zhang
As one of the most popular GNN architectures, the graph attention network (GAT) is considered among the most advanced learning architectures for graph representation and has been widely used in various graph mining tasks with impressive results.
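For reference, a single GAT layer scores each edge with a shared attention vector and aggregates neighbors by the softmax-normalized weights. A dense NumPy sketch of the standard mechanism (Velickovic et al.); the variable names and the LeakyReLU slope of 0.2 are conventional choices, not details from the paper above:

```python
import numpy as np

def gat_layer(h, W, a, adj):
    """Minimal single-head GAT layer on a dense adjacency matrix.

    h:   (N, d)  input node features
    W:   (d, d') shared linear projection
    a:   (2*d',) attention vector
    adj: (N, N)  adjacency with self-loops (adj[i, i] = 1)
    """
    z = h @ W                       # project features
    N = z.shape[0]
    e = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            # e_ij = LeakyReLU(a^T [Wh_i || Wh_j])
            s = a @ np.concatenate([z[i], z[j]])
            e[i, j] = s if s > 0 else 0.2 * s
    e = np.where(adj > 0, e, -np.inf)   # attend only over neighbors
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return alpha @ z                # attention-weighted aggregation
```

The self-loops in `adj` guarantee every softmax row has at least one finite logit; production implementations operate on sparse edge lists rather than the dense double loop shown here.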