Search Results for author: Xiaosen Li

Found 5 papers, 2 papers with code

GMLP: Building Scalable and Flexible Graph Neural Networks with Feature-Message Passing

no code implementations20 Apr 2021 Wentao Zhang, Yu Shen, Zheyu Lin, Yang Li, Xiaosen Li, Wen Ouyang, Yangyu Tao, Zhi Yang, Bin Cui

In recent studies, neural message passing has proved to be an effective way to design graph neural networks (GNNs), which have achieved state-of-the-art performance in many graph-based tasks.
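The abstract refers to neural message passing, the standard GNN building block in which each node aggregates its neighbors' features and updates its own. A minimal sketch of one such round is below; the mean-aggregate and averaging update used here are illustrative stand-ins, not GMLP's actual feature-message passing operators.

```python
def message_passing_step(features, adjacency):
    """One message-passing round: each node averages its neighbors'
    feature vectors, then combines the aggregate with its own features.

    features:  dict mapping node -> list of floats
    adjacency: dict mapping node -> list of neighbor nodes
    """
    new_features = {}
    for node, feats in features.items():
        neighbors = adjacency.get(node, [])
        if neighbors:
            # Aggregate: component-wise mean of neighbor features.
            agg = [
                sum(features[n][i] for n in neighbors) / len(neighbors)
                for i in range(len(feats))
            ]
        else:
            agg = [0.0] * len(feats)
        # Update: simple average of the node's own features and the message.
        new_features[node] = [(s + a) / 2 for s, a in zip(feats, agg)]
    return new_features
```

Real GNN layers replace both steps with learned functions; the skeleton of aggregate-then-update is the part the paper's title refers to.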

Graph Attention MLP with Reliable Label Utilization

no code implementations23 Aug 2021 Wentao Zhang, Ziqi Yin, Zeang Sheng, Wen Ouyang, Xiaosen Li, Yangyu Tao, Zhi Yang, Bin Cui

Graph neural networks (GNNs) have recently achieved state-of-the-art performance in many graph-based applications.

Graph Attention

K-Core Decomposition on Super Large Graphs with Limited Resources

no code implementations26 Dec 2021 Shicheng Gao, Jie Xu, Xiaosen Li, Fangcheng Fu, Wentao Zhang, Wen Ouyang, Yangyu Tao, Bin Cui

For example, with our divide-and-conquer technique, the distributed K-core decomposition algorithm can scale to a graph with 136 billion edges without losing correctness.
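For reference, the sequential baseline this paper scales up is the classic peeling algorithm for K-core decomposition: repeatedly remove a minimum-degree vertex, and record the largest degree threshold seen so far as that vertex's core number. The sketch below is this textbook algorithm, not the paper's distributed divide-and-conquer method.

```python
from collections import defaultdict

def core_numbers(edges):
    """Peeling algorithm for K-core decomposition.

    A vertex's core number is the largest k such that it belongs to a
    subgraph where every vertex has degree >= k. Removing vertices in
    order of minimum remaining degree computes this exactly.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    degree = {v: len(ns) for v, ns in adj.items()}
    core = {}
    k = 0
    remaining = set(degree)
    while remaining:
        # Pick a vertex of minimum remaining degree (O(n) scan; a
        # bucket queue makes this O(E) overall).
        v = min(remaining, key=lambda x: degree[x])
        k = max(k, degree[v])
        core[v] = k
        remaining.remove(v)
        for u in adj[v]:
            if u in remaining:
                degree[u] -= 1
    return core
```

On a triangle with one pendant vertex, the triangle nodes get core number 2 and the pendant gets 1; the challenge the paper addresses is running this peeling logic on graphs far too large for one machine's memory.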

PaSca: a Graph Neural Architecture Search System under the Scalable Paradigm

1 code implementation1 Mar 2022 Wentao Zhang, Yu Shen, Zheyu Lin, Yang Li, Xiaosen Li, Wen Ouyang, Yangyu Tao, Zhi Yang, Bin Cui

Through deconstructing the message passing mechanism, PaSca presents a novel Scalable Graph Neural Architecture Paradigm (SGAP), together with a general architecture design space consisting of 150k different designs.
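The scalable paradigm referred to here decouples graph propagation from model training: neighbor aggregation is precomputed once as a preprocessing pass, so the subsequent model trains on fixed feature tables without touching the graph. A sketch of that precompute stage, using mean aggregation as an illustrative choice (SGAP's design space covers many aggregators and combinations):

```python
def precompute_propagation(features, adjacency, num_hops):
    """Aggregate stage of a decoupled (precompute-then-train) GNN:
    repeatedly average neighbor features, keeping one feature table
    per hop. Runs once, before any training.

    Returns a list of num_hops + 1 feature dicts (hop 0 = raw input).
    """
    hops = [features]
    current = features
    for _ in range(num_hops):
        nxt = {}
        for node, feats in current.items():
            # Isolated nodes fall back to their own features.
            nbrs = adjacency.get(node, []) or [node]
            nxt[node] = [
                sum(current[n][i] for n in nbrs) / len(nbrs)
                for i in range(len(feats))
            ]
        hops.append(nxt)
        current = nxt
    return hops
```

Because the returned hop tables are plain per-node features, any standard classifier (e.g. an MLP) can then be trained with ordinary mini-batching, which is what makes the paradigm scalable.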

Neural Architecture Search
