no code implementations • 22 Mar 2024 • Zhenbang Xiao, Yu Wang, Shunyu Liu, Huiqiong Wang, Mingli Song, Tongya Zheng
The burdensome training costs of large-scale graphs have spurred significant interest in graph condensation, which tunes Graph Neural Networks (GNNs) on a small condensed graph for later use on the large-scale original graph.
1 code implementation • 4 Mar 2024 • Yu Wang, Tongya Zheng, Yuxuan Liang, Shunyu Liu, Mingli Song
To address these challenges, we tailor a Cross-city mObiLity trAnsformer (COLA) with a dedicated model-agnostic transfer framework that effectively transfers cross-city knowledge for human trajectory simulation.
1 code implementation • 18 Jan 2024 • Zhenbang Xiao, Shunyu Liu, Yu Wang, Tongya Zheng, Mingli Song
Graph condensation has emerged as an intriguing technique that equips Graph Neural Networks with a more compact yet informative small graph in place of a large-scale one, saving the expensive costs of large-scale graph learning.
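As a rough intuition for what a condensed graph looks like, the toy sketch below coarsens a graph by clustering nodes and averaging their features and edges. This is purely illustrative: the function name `condense_graph` and the k-means-style clustering are assumptions for the example, not the paper's method, which learns the small graph rather than coarsening the original.

```python
import numpy as np

def condense_graph(adj, feats, n_clusters, seed=0):
    """Toy graph coarsening: assign nodes to clusters (k-means style),
    then average features and aggregate edges to form a small graph.
    Illustrative only -- real graph condensation methods *learn* the
    condensed graph, e.g. by gradient matching."""
    rng = np.random.default_rng(seed)
    n = feats.shape[0]
    centers = feats[rng.choice(n, n_clusters, replace=False)]
    for _ in range(10):  # a few k-means iterations
        dists = ((feats[:, None, :] - centers[None]) ** 2).sum(-1)
        assign = dists.argmin(1)
        for k in range(n_clusters):
            mask = assign == k
            if mask.any():
                centers[k] = feats[mask].mean(0)
    P = np.eye(n_clusters)[assign]          # one-hot assignment (n x c)
    sizes = P.sum(0).clip(min=1)
    cond_feats = (P.T @ feats) / sizes[:, None]   # per-cluster mean features
    cond_adj = P.T @ adj @ P                      # aggregated edge weights
    return cond_adj, cond_feats, assign
```

A GNN trained on `(cond_adj, cond_feats)` would then be evaluated on the original graph; the learning-based condensation the entry describes replaces the averaging step with an optimization objective.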
no code implementations • 28 Nov 2023 • Yaoquan Wei, Shunyu Liu, Jie Song, Tongya Zheng, KaiXuan Chen, Yong Wang, Mingli Song
Instead, we employ a proxy model to extract state features that are both discriminative (adaptive to the agent) and generally applicable (robust to agent noise).
1 code implementation • 26 Jul 2023 • Tongya Zheng, Tianli Zhang, Qingzheng Guan, Wenjie Huang, Zunlei Feng, Mingli Song, Chun Chen
Therefore, we first generate a dataset with 45,000 numerical simulations and 900 particle types to facilitate the research progress of machine learning for particle crushing.
no code implementations • 15 Jun 2023 • Yu Wang, Tongya Zheng, Shunyu Liu, KaiXuan Chen, Zunlei Feng, Yunzhi Hao, Mingli Song
The human mobility simulation task aims to generate human mobility trajectories from a small set of trajectory data, and has attracted much attention due to the scarcity and sparsity of human mobility data.
1 code implementation • 31 May 2023 • KaiXuan Chen, Shunyu Liu, Tongtian Zhu, Tongya Zheng, Haofei Zhang, Zunlei Feng, Jingwen Ye, Mingli Song
Graph Neural Networks (GNNs) have emerged as a powerful category of learning architecture for handling graph-structured data.
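The core operation GNNs perform on graph-structured data is neighborhood aggregation. As a minimal sketch (a standard GCN-style layer, not this paper's architecture), one layer averages each node's neighbors with symmetric normalization and applies a linear map:

```python
import numpy as np

def gcn_layer(adj, x, w):
    """One graph-convolution layer: add self-loops, symmetrically
    normalize the adjacency, aggregate neighbor features, then apply
    a linear map and ReLU (Kipf & Welling style sketch)."""
    a_hat = adj + np.eye(adj.shape[0])          # self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ x @ w, 0.0)      # ReLU
```

Stacking such layers lets information propagate over multi-hop neighborhoods, which is what makes GNNs effective on graph-structured data.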
1 code implementation • 27 May 2023 • Yihe Zhou, Shunyu Liu, Yunpeng Qing, KaiXuan Chen, Tongya Zheng, Yanhao Huang, Jie Song, Mingli Song
Despite the encouraging results achieved, CTDE makes an independence assumption on agent policies, which prevents agents from adopting global cooperative information from each other during centralized training.
1 code implementation • 15 Apr 2023 • Tongya Zheng, Xinchao Wang, Zunlei Feng, Jie Song, Yunzhi Hao, Mingli Song, Xingen Wang, Xinyu Wang, Chun Chen
The whole temporal neighborhood of a node reveals its varying preferences over time.
1 code implementation • 15 Apr 2023 • Tongya Zheng, Zunlei Feng, Tianli Zhang, Yunzhi Hao, Mingli Song, Xingen Wang, Xinyu Wang, Ji Zhao, Chun Chen
The proposed TIP-GNN focuses on the bilevel graph structure in temporal networks: besides the explicit interaction graph, a node's sequential interactions can also be constructed as a transition graph.
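The transition-graph half of that bilevel structure can be illustrated with a tiny sketch: from a node's time-ordered interaction sequence, count how often one neighbor follows another. The helper name `transition_graph` is an assumption for illustration; TIP-GNN's actual construction may differ in details.

```python
from collections import Counter

def transition_graph(interactions):
    """Build a transition graph from a node's time-ordered interaction
    sequence: edge (u, v) is weighted by how often neighbor v directly
    follows neighbor u. (Sketch of the bilevel idea only.)"""
    return dict(Counter(zip(interactions, interactions[1:])))
```

This weighted transition graph complements the explicit interaction graph by encoding the order of a node's interactions, not just their existence.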
1 code implementation • 27 Feb 2023 • Jing Liu, Tongya Zheng, Guanzheng Zhang, Qinfen Hao
It then provides a comprehensive summary of three types of Graph-based Knowledge Distillation methods, namely Graph-based Knowledge Distillation for deep neural networks (DKD), Graph-based Knowledge Distillation for GNNs (GKD), and Self-Knowledge Distillation based Graph-based Knowledge Distillation (SKD).
1 code implementation • 23 Nov 2022 • Shunyu Liu, Yihe Zhou, Jie Song, Tongya Zheng, KaiXuan Chen, Tongtian Zhu, Zunlei Feng, Mingli Song
Value Decomposition (VD) aims to deduce the contributions of agents for decentralized policies in the presence of only global rewards, and has recently emerged as a powerful credit assignment paradigm for tackling cooperative Multi-Agent Reinforcement Learning (MARL) problems.
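The simplest instance of value decomposition is the additive (VDN-style) form, sketched below as a general illustration of the paradigm rather than this paper's method: the joint action-value is the sum of per-agent utilities, so each agent can act greedily on its own utility while still maximizing the joint value.

```python
import numpy as np

def vdn_joint_q(agent_qs):
    """VDN-style additive decomposition: joint Q is the sum of
    per-agent utilities, so only a global reward is needed to train."""
    return float(np.sum(agent_qs, axis=-1))

def greedy_joint_action(per_agent_qs):
    """Each agent maximizes its own utility; under the additive form
    this also maximizes the joint Q (the Individual-Global-Max property),
    enabling decentralized execution."""
    return [int(np.argmax(q)) for q in per_agent_qs]
```

Richer decompositions (e.g. monotonic mixing networks) relax the additive form while preserving this per-agent argmax consistency.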
no code implementations • 25 Jul 2022 • Jing Liu, Tongya Zheng, Qinfen Hao
To the best of our knowledge, we are the first to propose a HIgh-order RElational (HIRE) knowledge distillation framework on heterogeneous graphs, which can significantly boost the prediction performance regardless of model architectures of HGNNs.
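As a rough sketch of what combining first-order and higher-order distillation terms looks like (illustrative only; the function names and loss forms here are assumptions, not HIRE's actual objectives), one can pair a soft-label KL term with a term matching inter-node correlation matrices between teacher and student:

```python
import numpy as np

def softmax(x, t=1.0):
    z = np.exp((x - x.max(-1, keepdims=True)) / t)
    return z / z.sum(-1, keepdims=True)

def relational_kd_loss(teacher_logits, student_logits, t=2.0):
    """Sketch of two distillation terms: a first-order soft-label KL
    term plus a higher-order term matching node-node correlation
    matrices. (Illustrative; HIRE's exact losses differ.)"""
    pt = softmax(teacher_logits, t)
    ps = softmax(student_logits, t)
    kl = np.sum(pt * (np.log(pt + 1e-12) - np.log(ps + 1e-12))) / len(pt)
    rel = np.mean((teacher_logits @ teacher_logits.T
                   - student_logits @ student_logits.T) ** 2)
    return kl + rel
```

The relational term is what makes such a framework model-agnostic: it constrains relationships between node predictions rather than any internal HGNN representation.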
1 code implementation • 23 Nov 2021 • Tongya Zheng, Zunlei Feng, Yu Wang, Chengchao Shen, Mingli Song, Xingen Wang, Xinyu Wang, Chun Chen, Hao Xu
Our proposed Dynamic Preference Structure (DPS) framework consists of two stages: structure sampling and graph fusion.