no code implementations • 3 Mar 2024 • Qincheng Lu, Jiaqi Zhu, Sitao Luan, Xiao-Wen Chang
However, since it only incorporates information from the immediate neighborhood, it lacks the ability to capture long-range and global graph information, leading to unsatisfactory performance on some datasets, particularly heterophilic graphs.
no code implementations • 22 Jun 2023 • Sitao Luan
This report gives a summary of two problems about graph convolutional networks (GCNs): over-smoothing and heterophily challenges, and outlines future directions to explore.
no code implementations • 28 Apr 2023 • Chenqing Hua, Sitao Luan, Minkai Xu, Rex Ying, Jie Fu, Stefano Ermon, Doina Precup
Our model is a promising approach for designing stable and diverse molecules and can be applied to a wide range of tasks in molecular modeling.
1 code implementation • 25 Apr 2023 • Sitao Luan, Chenqing Hua, Minkai Xu, Qincheng Lu, Jiaqi Zhu, Xiao-Wen Chang, Jie Fu, Jure Leskovec, Doina Precup
The homophily principle, i.e., that nodes with the same labels are more likely to be connected, has been believed to be the main reason for the performance superiority of Graph Neural Networks (GNNs) over Neural Networks on node classification tasks.
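The homophily principle above can be quantified as the edge homophily ratio: the fraction of edges whose endpoints share a label. A minimal sketch on a hypothetical toy graph (the edge list and labels are illustrative, not from the paper):

```python
def edge_homophily(edges, labels):
    """Fraction of edges whose two endpoints have the same label."""
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)

# Toy graph: 4 nodes, two classes
labels = [0, 0, 1, 1]
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]  # 2 intra-class, 2 inter-class edges
print(edge_homophily(edges, labels))  # 0.5
```

A ratio near 1 indicates a homophilic graph; a ratio near 0 indicates a heterophilic one.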
no code implementations • 21 Dec 2022 • Sitao Luan, Mingde Zhao, Chenqing Hua, Xiao-Wen Chang, Doina Precup
The core operation of current Graph Neural Networks (GNNs) is the aggregation enabled by the graph Laplacian or message passing, which filters the neighborhood information of nodes.
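The aggregation operation described here can be sketched with a random-walk-normalized adjacency filter, one common GCN-style choice (the 3-node path graph and scalar features are illustrative assumptions):

```python
import numpy as np

# Toy graph: path 0-1-2
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
A_hat = A + np.eye(3)                     # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # inverse degree matrix
P = D_inv @ A_hat                         # random-walk normalized filter

X = np.array([[1.0], [0.0], [1.0]])       # scalar node features
print(P @ X)  # each node's feature becomes its neighborhood mean
```

One aggregation step replaces every node's feature with the mean over its closed neighborhood, which is exactly the low-pass filtering behavior the abstract refers to.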
no code implementations • 30 Oct 2022 • Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Xiao-Wen Chang, Doina Precup
Graph Neural Networks (GNNs) extend basic Neural Networks (NNs) by additionally making use of graph structure based on the relational inductive bias (edge bias), rather than treating the nodes as collections of independent and identically distributed (i.i.d.) samples.
1 code implementation • 14 Oct 2022 • Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Mingde Zhao, Shuyuan Zhang, Xiao-Wen Chang, Doina Precup
ACM is more powerful than the commonly used uni-channel framework for node classification tasks on heterophilic graphs and is easy to implement in baseline GNN layers.
Inductive Bias • Node Classification on Non-Homophilic (Heterophilic) Graphs
no code implementations • 24 May 2022 • Chenqing Hua, Sitao Luan, Qian Zhang, Jie Fu
Graph Neural Networks (GNNs) are inference methods developed in recent years that are attracting growing attention due to their effectiveness and flexibility in solving inference and learning problems over graph-structured data.
no code implementations • 29 Sep 2021 • Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Mingde Zhao, Shuyuan Zhang, Xiao-Wen Chang, Doina Precup
In this paper, we first show that not all cases of heterophily are harmful for GNNs with aggregation operation.
no code implementations • 12 Sep 2021 • Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Mingde Zhao, Shuyuan Zhang, Xiao-Wen Chang, Doina Precup
In this paper, we first show that not all cases of heterophily are harmful for GNNs with aggregation operation.
Ranked #1 on Node Classification on Pubmed
1 code implementation • NeurIPS 2021 • Mingde Zhao, Zhen Liu, Sitao Luan, Shuyuan Zhang, Doina Precup, Yoshua Bengio
We present an end-to-end, model-based deep reinforcement learning agent which dynamically attends to relevant parts of its state during planning.
Model-based Reinforcement Learning • Out-of-Distribution Generalization • +2
no code implementations • NeurIPS 2021 • Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Mingde Zhao, Shuyuan Zhang, Xiao-Wen Chang, Doina Precup
In this paper, we first show that not all cases of heterophily are harmful for GNNs with aggregation operation.
no code implementations • 20 Aug 2020 • Sitao Luan, Mingde Zhao, Chenqing Hua, Xiao-Wen Chang, Doina Precup
The core operation of current Graph Neural Networks (GNNs) is the aggregation enabled by the graph Laplacian or message passing, which filters the neighborhood node information.
no code implementations • 20 Aug 2020 • Sitao Luan, Mingde Zhao, Xiao-Wen Chang, Doina Precup
The performance limit of Graph Convolutional Networks (GCNs), and the fact that we cannot stack more layers to increase performance (as we usually do for other deep learning paradigms), are pervasively thought to be caused by limitations of the GCN layers, such as insufficient expressive power.
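The difficulty with stacking layers is commonly illustrated by over-smoothing: repeatedly applying a mean-aggregation filter (with no nonlinearity, on an assumed toy graph) drives all node features toward the same value, making nodes indistinguishable. A minimal numerical sketch:

```python
import numpy as np

# Toy graph: triangle 0-1-2 with a pendant node 3 attached to node 2
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)
P = np.diag(1.0 / A_hat.sum(axis=1)) @ A_hat  # mean-aggregation filter

X = np.array([[1.0], [0.0], [0.0], [1.0]])    # distinct node features
for _ in range(50):                           # 50 aggregation rounds
    X = P @ X
print(X.max() - X.min())  # spread across nodes collapses toward 0
```

Because the filter is a row-stochastic matrix on a connected graph with self-loops, iterating it converges to a constant-per-node vector, so deep stacks of pure aggregation wash out the input signal.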
no code implementations • 19 Sep 2019 • Sitao Luan, Xiao-Wen Chang, Doina Precup
In the tabular case, when the reward and environment dynamics are known, policy evaluation can be written as $\bm{V}_{\bm{\pi}} = (I - \gamma P_{\bm{\pi}})^{-1} \bm{r}_{\bm{\pi}}$, where $P_{\bm{\pi}}$ is the state transition matrix given policy ${\bm{\pi}}$ and $\bm{r}_{\bm{\pi}}$ is the reward signal given ${\bm{\pi}}$.
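The closed-form expression $\bm{V}_{\bm{\pi}} = (I - \gamma P_{\bm{\pi}})^{-1} \bm{r}_{\bm{\pi}}$ can be checked numerically on a toy 2-state MDP (the transition probabilities and rewards below are illustrative assumptions, not from the paper):

```python
import numpy as np

gamma = 0.9
P_pi = np.array([[0.8, 0.2],
                 [0.3, 0.7]])   # state-transition matrix under policy pi
r_pi = np.array([1.0, 0.0])     # expected one-step reward under pi

# Closed-form policy evaluation: solve (I - gamma * P_pi) V = r_pi
V = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)
print(V)
```

Solving the linear system with `np.linalg.solve` is preferable to forming the inverse explicitly; the result satisfies the Bellman equation $V = r_{\pi} + \gamma P_{\pi} V$ by construction.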
1 code implementation • NeurIPS 2019 • Sitao Luan, Mingde Zhao, Xiao-Wen Chang, Doina Precup
Recently, neural network based approaches have achieved significant improvement for solving large, complex, graph-structured problems.
Ranked #1 on Node Classification on PubMed (0.1%)
2 code implementations • 25 Apr 2019 • Mingde Zhao, Sitao Luan, Ian Porada, Xiao-Wen Chang, Doina Precup
Temporal-Difference (TD) learning is a standard and very successful reinforcement learning approach, at the core of both algorithms that learn the value of a given policy and algorithms that learn how to improve policies.
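The policy-evaluation side of TD learning can be sketched as tabular TD(0) on a hypothetical two-state chain (the environment, step size, and episode count are illustrative assumptions): state 0 transitions to state 1 with reward 0, and state 1 terminates with reward 1.

```python
def td0(episodes=2000, alpha=0.1, gamma=1.0):
    """Tabular TD(0) on a deterministic 2-state chain."""
    V = [0.0, 0.0]
    for _ in range(episodes):
        # step: state 0 -> state 1, reward 0
        V[0] += alpha * (0.0 + gamma * V[1] - V[0])
        # step: state 1 -> terminal, reward 1 (terminal value is 0)
        V[1] += alpha * (1.0 + gamma * 0.0 - V[1])
    return V

V = td0()
print(V)  # both values approach the true value 1.0
```

Each update moves the estimate toward the bootstrapped target $r + \gamma V(s')$, which is the defining mechanism of TD methods.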