no code implementations • 21 Dec 2022 • Sitao Luan, Mingde Zhao, Chenqing Hua, Xiao-Wen Chang, Doina Precup
The core operation of current Graph Neural Networks (GNNs) is the aggregation enabled by the graph Laplacian or message passing, which filters the neighborhood information of nodes.
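A minimal sketch of the aggregation operation described above, using the symmetrically normalized adjacency as a low-pass graph filter; the function and variable names are illustrative assumptions, not this paper's code:

```python
import numpy as np

def aggregate(adj: np.ndarray, features: np.ndarray) -> np.ndarray:
    """One round of neighborhood aggregation with the renormalized adjacency
    (A + I), i.e. a low-pass graph filter as used in GCN-style layers."""
    a_hat = adj + np.eye(adj.shape[0])                 # add self-loops
    deg = a_hat.sum(axis=1)                            # node degrees
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))           # D^{-1/2}
    return d_inv_sqrt @ a_hat @ d_inv_sqrt @ features  # smoothed node features

# toy usage: a 3-node path graph with 2-dimensional features
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x = np.random.randn(3, 2)
print(aggregate(adj, x).shape)  # (3, 2)
```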
1 code implementation • 14 Oct 2022 • Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Mingde Zhao, Shuyuan Zhang, Xiao-Wen Chang, Doina Precup
ACM is more powerful than the commonly used uni-channel framework for node classification on heterophilic graphs and is easy to implement in baseline GNN layers (a sketch of the multi-channel idea appears after the task tags below).
Inductive Bias
Node Classification on Non-Homophilic (Heterophilic) Graphs
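A hedged sketch of the multi-channel idea: a low-pass, a high-pass, and an identity channel whose outputs are mixed with per-node learned weights. The module, parameter names, and shapes below are illustrative assumptions, not the released ACM implementation:

```python
import torch
import torch.nn as nn

class ThreeChannelLayer(nn.Module):
    """Illustrative low-pass / high-pass / identity channel mixing (ACM-style)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.w_low = nn.Linear(in_dim, out_dim)    # low-pass channel weights
        self.w_high = nn.Linear(in_dim, out_dim)   # high-pass channel weights
        self.w_id = nn.Linear(in_dim, out_dim)     # identity channel weights
        self.mix = nn.Linear(3 * out_dim, 3)       # per-node mixing scores

    def forward(self, a_norm: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        h_low = torch.relu(self.w_low(a_norm @ x))         # smoothed features
        h_high = torch.relu(self.w_high(x - a_norm @ x))   # (I - A_hat) x
        h_id = torch.relu(self.w_id(x))                    # raw features
        alpha = torch.softmax(
            self.mix(torch.cat([h_low, h_high, h_id], dim=1)), dim=1)
        return (alpha[:, 0:1] * h_low
                + alpha[:, 1:2] * h_high
                + alpha[:, 2:3] * h_id)

# toy usage: 4 nodes, a trivial normalized adjacency, 8-dim input features
layer = ThreeChannelLayer(8, 16)
print(layer(torch.eye(4), torch.randn(4, 8)).shape)  # torch.Size([4, 16])
```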
no code implementations • 21 Mar 2022 • Akram Erraqabi, Marlos C. Machado, Mingde Zhao, Sainbayar Sukhbaatar, Alessandro Lazaric, Ludovic Denoyer, Yoshua Bengio
In reinforcement learning, the graph Laplacian has proved to be a valuable tool in the task-agnostic setting, with applications ranging from skill discovery to reward shaping.
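As a concrete illustration of how the graph Laplacian is used in the task-agnostic setting, the sketch below computes the smoothest Laplacian eigenvectors of a state-transition graph as state features (in the spirit of proto-value functions); the helper and its usage are assumptions for illustration, not this paper's method:

```python
import numpy as np

def laplacian_features(adj: np.ndarray, k: int) -> np.ndarray:
    """Return the k smoothest eigenvectors of the normalized graph Laplacian,
    usable as task-agnostic state features (e.g. for reward shaping or skills)."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt  # normalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(lap)                      # ascending eigenvalues
    return eigvecs[:, :k]                                       # smoothest k eigenvectors

# toy usage: adjacency of a 4-state chain MDP
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
print(laplacian_features(adj, 2).shape)  # (4, 2)
```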
no code implementations • 29 Sep 2021 • Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Mingde Zhao, Shuyuan Zhang, Xiao-Wen Chang, Doina Precup
In this paper, we first show that not all cases of heterophily are harmful for GNNs with the aggregation operation.
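For context, heterophily is commonly quantified by the edge homophily ratio, the fraction of edges joining same-class endpoints; the sketch below computes that standard ratio and is only an illustration, not the measure proposed in this paper:

```python
import numpy as np

def edge_homophily(edges: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of edges whose two endpoints share a class label.
    `edges` has shape (num_edges, 2); `labels` has shape (num_nodes,)."""
    same = labels[edges[:, 0]] == labels[edges[:, 1]]
    return float(same.mean())

# toy usage: 4 nodes, 3 edges, two classes
edges = np.array([[0, 1], [1, 2], [2, 3]])
labels = np.array([0, 0, 1, 1])
print(edge_homophily(edges, labels))  # 0.666...
```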
no code implementations • 12 Sep 2021 • Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Mingde Zhao, Shuyuan Zhang, Xiao-Wen Chang, Doina Precup
In this paper, we first show that not all cases of heterophily are harmful for GNNs with the aggregation operation.
Ranked #1 on Node Classification on PubMed
no code implementations • ICML Workshop URL 2021 • Akram Erraqabi, Mingde Zhao, Marlos C. Machado, Yoshua Bengio, Sainbayar Sukhbaatar, Ludovic Denoyer, Alessandro Lazaric
In this work, we introduce a method that explicitly couples representation learning with exploration when the agent is not provided with a uniform prior over the state space.
1 code implementation • NeurIPS 2021 • Mingde Zhao, Zhen Liu, Sitao Luan, Shuyuan Zhang, Doina Precup, Yoshua Bengio
We present an end-to-end, model-based deep reinforcement learning agent which dynamically attends to relevant parts of its state during planning; a minimal sketch of this kind of state attention follows the task tags below.
Model-based Reinforcement Learning
Out-of-Distribution Generalization
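A hedged, minimal sketch of attending to a subset of state variables: soft attention weights over a set of state feature vectors produce a compact summary for the planner to condition on. The module below is an illustrative stand-in, not the agent's actual architecture:

```python
import torch
import torch.nn as nn

class StateAttention(nn.Module):
    """Soft attention over a set of state feature vectors ("parts"),
    producing a compact summary a planner could condition on."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)   # relevance score per state part

    def forward(self, state_parts: torch.Tensor) -> torch.Tensor:
        # state_parts: (num_parts, feat_dim)
        weights = torch.softmax(self.score(state_parts), dim=0)  # (num_parts, 1)
        return (weights * state_parts).sum(dim=0)                # (feat_dim,)

# toy usage: 5 state parts with 8-dimensional features
parts = torch.randn(5, 8)
print(StateAttention(8)(parts).shape)  # torch.Size([8])
```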
no code implementations • NeurIPS 2021 • Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Mingde Zhao, Shuyuan Zhang, Xiao-Wen Chang, Doina Precup
In this paper, we first show that not all cases of heterophily are harmful for GNNs with the aggregation operation.
no code implementations • 20 Aug 2020 • Sitao Luan, Mingde Zhao, Xiao-Wen Chang, Doina Precup
Graph Convolutional Networks (GCNs) hit a performance limit, and, unlike other deep learning paradigms, stacking more layers does not raise their performance; this is pervasively attributed to limitations of the GCN layers themselves, such as insufficient expressive power.
no code implementations • 20 Aug 2020 • Sitao Luan, Mingde Zhao, Chenqing Hua, Xiao-Wen Chang, Doina Precup
The core operation of current Graph Neural Networks (GNNs) is the aggregation enabled by the graph Laplacian or message passing, which filters the neighborhood node information.
1 code implementation • 16 Jun 2020 • Mingde Zhao
Our approach can be used in both on-policy and off-policy learning.
no code implementations • ICCV 2019 • Hongwei Ge, Zehang Yan, Kai Zhang, Mingde Zhao, Liang Sun
In the training process, the forward and backward LSTMs encode the succeeding and preceding words into their respective hidden states by simultaneously constructing the whole sentence in a complementary manner.
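A hedged sketch of the bidirectional encoding described above: a forward and a backward LSTM read the same word sequence from opposite ends, and their per-word hidden states are concatenated. Shapes and names are illustrative only, not the paper's model:

```python
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    """Encode a word-embedding sequence with forward and backward LSTMs."""
    def __init__(self, emb_dim: int, hidden_dim: int):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)   # forward + backward passes

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, seq_len, emb_dim)
        outputs, _ = self.lstm(embeddings)        # (batch, seq_len, 2 * hidden_dim)
        return outputs                            # concatenated fwd/bwd states per word

# toy usage: a batch of 2 sentences, 7 words each, 16-dim embeddings
enc = BiLSTMEncoder(16, 32)
print(enc(torch.randn(2, 7, 16)).shape)  # torch.Size([2, 7, 64])
```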
1 code implementation • NeurIPS 2019 • Sitao Luan, Mingde Zhao, Xiao-Wen Chang, Doina Precup
Recently, neural-network-based approaches have achieved significant improvements in solving large, complex, graph-structured problems.
Ranked #1 on Node Classification on PubMed (0.1%)
2 code implementations • 25 Apr 2019 • Mingde Zhao, Sitao Luan, Ian Porada, Xiao-Wen Chang, Doina Precup
Temporal-Difference (TD) learning is a standard and very successful reinforcement learning approach, at the core of both algorithms that learn the value of a given policy and algorithms that learn how to improve policies.
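For readers unfamiliar with TD learning, a minimal tabular TD(0) value update is sketched below; this is the textbook prediction rule, not the method proposed in this paper:

```python
import numpy as np

def td0_update(values: np.ndarray, s: int, r: float, s_next: int,
               alpha: float = 0.1, gamma: float = 0.99) -> None:
    """One TD(0) step: move V(s) toward the bootstrapped target r + gamma * V(s')."""
    td_error = r + gamma * values[s_next] - values[s]
    values[s] += alpha * td_error

# toy usage: a 5-state value table and one observed transition (s=0 -> s=1, reward 1.0)
V = np.zeros(5)
td0_update(V, s=0, r=1.0, s_next=1)
print(V[0])  # 0.1
```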
no code implementations • 12 Apr 2019 • Mingde Zhao, Hongwei Ge, Kai Zhang, Yaqing Hou
The infeasible parts of the objective space in difficult many-objective optimization problems pose difficulties for evolutionary algorithms.
no code implementations • 17 Dec 2018 • Mingde Zhao, Hongwei Ge, Yi Lian, Kai Zhang
The generalization abilities of heuristic optimizers may deteriorate as the dimensionality of the search space increases.
no code implementations • 3 Mar 2018 • Hongwei Ge, Mingde Zhao, Liang Sun, Zhen Wang, Guozhen Tan, Qiang Zhang, C. L. Philip Chen
This paper proposes a many-objective optimization algorithm with two interacting processes: cascade clustering and reference point incremental learning (CLIA).