no code implementations • 31 Oct 2023 • Gaichao Li, Jinsong Chen, John E. Hopcroft, Kun He
Graph pooling methods have been widely used for downsampling graphs, achieving impressive results on multiple graph-level tasks such as graph classification and graph generation.
no code implementations • 17 Oct 2023 • Jinsong Chen, Gaichao Li, John E. Hopcroft, Kun He
In this way, SignGT could learn informative node representations from both long-range dependencies and local topology information.
Ranked #4 on Node Classification on Actor
no code implementations • 22 May 2023 • Jinsong Chen, Chang Liu, Kaiyuan Gao, Gaichao Li, Kun He
Graph Transformers, emerging as a new architecture for graph representation learning, suffer from quadratic complexity in the number of nodes when handling large graphs.
no code implementations • 15 Nov 2022 • Gaichao Li, Jinsong Chen, Kun He
MNA-GT further employs an attention layer to learn the importance of different attention kernels, enabling the model to adaptively capture graph structural information for different nodes.
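The idea of weighting multiple attention kernels per node can be sketched generically. This is an illustrative sketch, not MNA-GT's actual implementation: the stacked kernel outputs and the per-node gate logits (which the paper would produce with a learned attention layer) are assumed inputs here.

```python
import numpy as np

def combine_kernel_outputs(kernel_outputs, gate_logits):
    """Combine outputs of several attention kernels with per-node weights.

    kernel_outputs: (k, n, d) — representations from k attention kernels.
    gate_logits:    (n, k)    — per-node importance scores; in practice these
                                would come from a learned attention layer,
                                here they are just given (illustration only).
    """
    # Softmax over the kernel axis gives each node its own mixing weights.
    w = np.exp(gate_logits - gate_logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)              # (n, k), rows sum to 1
    # Per-node weighted sum over the k kernel outputs.
    return np.einsum('nk,knd->nd', w, kernel_outputs)

mixed = combine_kernel_outputs(np.ones((3, 5, 4)), np.zeros((5, 3)))
```

With zero logits every node weights the three kernels uniformly, so identical kernel outputs pass through unchanged.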
1 code implementation • 10 Jun 2022 • Jinsong Chen, Kaiyuan Gao, Gaichao Li, Kun He
In this work, we observe that existing graph Transformers treat nodes as independent tokens and construct a single long sequence of all node tokens to train the Transformer model. This makes it hard to scale to large graphs, because the self-attention computation is quadratic in the number of nodes.
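The quadratic bottleneck described above is easy to see in a naive single-sequence self-attention over node tokens: the score matrix has one entry per node pair. A minimal numpy sketch (random projection matrices assumed purely for illustration):

```python
import numpy as np

def node_token_self_attention(X):
    """Naive self-attention over a single sequence of all node tokens.

    X: (n, d) node feature matrix. Every node attends to every other node,
    so the score matrix is (n, n) — quadratic in the number of nodes.
    """
    n, d = X.shape
    rng = np.random.default_rng(0)
    # Illustrative random projections standing in for learned weights.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)      # (n, n): the quadratic bottleneck
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)  # row-wise softmax
    return w @ V, scores.shape

out, score_shape = node_token_self_attention(np.ones((100, 16)))
```

For a graph with a million nodes the score matrix alone would hold 10^12 entries, which is why node-level graph Transformers need approximations or shorter token sequences.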