Search Results for author: Qincheng Lu

Found 10 papers, 2 papers with code

GCEPNet: Graph Convolution-Enhanced Expectation Propagation for Massive MIMO Detection

no code implementations • 23 Apr 2024 • Qincheng Lu, Sitao Luan, Xiao-Wen Chang

To our knowledge, we are the first to shed light on the connection between the system model and graph convolution, and the first to design data-dependent attention scores for graph convolution.

Representation Learning on Heterophilic Graph with Directional Neighborhood Attention

no code implementations • 3 Mar 2024 • Qincheng Lu, Jiaqi Zhu, Sitao Luan, Xiao-Wen Chang

However, since it incorporates information only from the immediate neighborhood, it lacks the ability to capture long-range and global graph information, leading to unsatisfactory performance on some datasets, particularly heterophilic graphs.

Graph Attention · Representation Learning

Extrapolatable Transformer Pre-training for Ultra Long Time-Series Forecasting

no code implementations • 29 Nov 2023 • Ziyang Song, Qincheng Lu, Hao Xu, David L. Buckeridge, Yue Li

This underscores the limitations of existing transformer-based architectures, particularly their scalability to large-scale data and their ability to capture long-term temporal dependencies.

Time Series · Time Series Forecasting +1

When Do Graph Neural Networks Help with Node Classification? Investigating the Impact of Homophily Principle on Node Distinguishability

1 code implementation • 25 Apr 2023 • Sitao Luan, Chenqing Hua, Minkai Xu, Qincheng Lu, Jiaqi Zhu, Xiao-Wen Chang, Jie Fu, Jure Leskovec, Doina Precup

The homophily principle, i.e., that nodes with the same labels are more likely to be connected, has been believed to be the main reason for the performance superiority of Graph Neural Networks (GNNs) over Neural Networks on node classification tasks.

Node Classification · Stochastic Block Model
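The homophily principle described above is commonly quantified as the edge homophily ratio: the fraction of edges whose endpoints share a label. A minimal sketch of that metric (the function name and toy graph are illustrative, not from the paper):

```python
# Edge homophily ratio: fraction of edges connecting same-label nodes.
# Illustrative sketch; the edge list and labels below are toy data.

def edge_homophily(edges, labels):
    """Return the fraction of edges whose endpoints share a label."""
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)

# Toy path graph on 4 nodes with two label classes.
edges = [(0, 1), (1, 2), (2, 3)]
labels = [0, 0, 1, 1]
print(edge_homophily(edges, labels))  # 2 of 3 edges are intra-class -> 2/3
```

A ratio near 1 indicates a homophilic graph, near 0 a heterophilic one, which is the regime the paper investigates.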

When Do We Need Graph Neural Networks for Node Classification?

no code implementations • 30 Oct 2022 • Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Xiao-Wen Chang, Doina Precup

Graph Neural Networks (GNNs) extend basic Neural Networks (NNs) by additionally making use of graph structure based on the relational inductive bias (edge bias), rather than treating the nodes as collections of independent and identically distributed (i.i.d.) samples.

Classification · Inductive Bias +1
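The edge bias mentioned above can be made concrete by contrasting a plain per-node NN layer with a mean-aggregation message-passing layer. This is a generic sketch of the two kinds of layers, not the paper's specific architecture:

```python
import numpy as np

# A plain NN layer transforms each node's features independently (i.i.d.
# treatment); a GNN layer first averages over graph neighbours, injecting
# the relational inductive bias (edge bias) before the same transform.

def nn_layer(X, W):
    """Per-node transform only: relu(X W)."""
    return np.maximum(X @ W, 0.0)

def gnn_layer(X, A, W):
    """Mean-aggregate over neighbours, then the same transform."""
    deg = A.sum(axis=1, keepdims=True)      # neighbour counts per node
    H = (A @ X) / np.maximum(deg, 1.0)      # average of neighbour features
    return np.maximum(H @ W, 0.0)

# Toy example: 3 nodes, node 0 linked to nodes 1 and 2.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
A = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
W = np.eye(2)
print(nn_layer(X, W))      # node 0's output ignores its neighbours
print(gnn_layer(X, A, W))  # node 0's output is the mean of nodes 1 and 2
```

Whether the aggregation step helps or hurts depends on the graph, which is exactly the question the paper studies.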

Revisiting Heterophily For Graph Neural Networks

1 code implementation • 14 Oct 2022 • Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Mingde Zhao, Shuyuan Zhang, Xiao-Wen Chang, Doina Precup

ACM is more powerful than the commonly used uni-channel framework for node classification on heterophilic graphs and is easy to implement in baseline GNN layers.

Inductive Bias · Node Classification on Non-Homophilic (Heterophilic) Graphs
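To illustrate the multi-channel idea behind ACM (Adaptive Channel Mixing) versus a uni-channel layer, here is a rough sketch that mixes low-pass, high-pass, and identity filter outputs with per-node weights. The filter and mixing definitions below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# ACM-style multi-channel sketch (illustrative, not the paper's code):
# a uni-channel GNN layer applies one fixed (usually low-pass) filter;
# here each node instead mixes three channels with its own weights.

def acm_style_layer(X, A, W_low, W_high, W_id, alpha):
    """alpha: (n, 3) row-stochastic per-node mixing weights."""
    n = A.shape[0]
    deg = A.sum(axis=1, keepdims=True)
    A_hat = A / np.maximum(deg, 1.0)           # row-normalised adjacency
    low = A_hat @ X @ W_low                    # low-pass: neighbour average
    high = (np.eye(n) - A_hat) @ X @ W_high    # high-pass: difference signal
    iden = X @ W_id                            # identity: raw features
    return (alpha[:, 0:1] * low
            + alpha[:, 1:2] * high
            + alpha[:, 2:3] * iden)
```

The high-pass channel preserves the feature differences that a purely low-pass (smoothing) filter destroys, which is why a multi-channel design can help on heterophilic graphs.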
