Search Results for author: Tongya Zheng

Found 14 papers, 10 papers with code

Simple Graph Condensation

no code implementations • 22 Mar 2024 • Zhenbang Xiao, Yu Wang, Shunyu Liu, Huiqiong Wang, Mingli Song, Tongya Zheng

The burdensome training costs on large-scale graphs have sparked significant interest in graph condensation, which tunes Graph Neural Networks (GNNs) on a small condensed graph for use on the large-scale original graph.

COLA: Cross-city Mobility Transformer for Human Trajectory Simulation

1 code implementation • 4 Mar 2024 • Yu Wang, Tongya Zheng, Yuxuan Liang, Shunyu Liu, Mingli Song

To address these challenges, we have tailored a Cross-city mObiLity trAnsformer (COLA) with a dedicated model-agnostic transfer framework by effectively transferring cross-city knowledge for human trajectory simulation.

CoLA · Transfer Learning

Disentangled Condensation for Large-scale Graphs

1 code implementation • 18 Jan 2024 • Zhenbang Xiao, Shunyu Liu, Yu Wang, Tongya Zheng, Mingli Song

Graph condensation has emerged as an intriguing technique that provides Graph Neural Networks with a compact yet informative small graph in place of a large-scale graph, saving the expensive costs of large-scale graph learning.

Graph Learning · Link Prediction +1

Agent-Aware Training for Agent-Agnostic Action Advising in Deep Reinforcement Learning

no code implementations • 28 Nov 2023 • Yaoquan Wei, Shunyu Liu, Jie Song, Tongya Zheng, KaiXuan Chen, Yong Wang, Mingli Song

Instead, we employ a proxy model to extract state features that are both discriminative (adaptive to the agent) and generally applicable (robust to agent noise).

Atari Games

Graph Neural Networks-based Hybrid Framework For Predicting Particle Crushing Strength

1 code implementation • 26 Jul 2023 • Tongya Zheng, Tianli Zhang, Qingzheng Guan, Wenjie Huang, Zunlei Feng, Mingli Song, Chun Chen

Therefore, we first generate a dataset with 45,000 numerical simulations and 900 particle types to facilitate machine learning research on particle crushing.

Chemical Reaction Prediction

Spatiotemporal-Augmented Graph Neural Networks for Human Mobility Simulation

no code implementations • 15 Jun 2023 • Yu Wang, Tongya Zheng, Shunyu Liu, KaiXuan Chen, Zunlei Feng, Yunzhi Hao, Mingli Song

The human mobility simulation task aims to generate human mobility trajectories given a small set of trajectory data; it has attracted much attention due to the scarcity and sparsity of human mobility data.

Improving Expressivity of GNNs with Subgraph-specific Factor Embedded Normalization

1 code implementation • 31 May 2023 • KaiXuan Chen, Shunyu Liu, Tongtian Zhu, Tongya Zheng, Haofei Zhang, Zunlei Feng, Jingwen Ye, Mingli Song

Graph Neural Networks (GNNs) have emerged as a powerful category of learning architecture for handling graph-structured data.

Is Centralized Training with Decentralized Execution Framework Centralized Enough for MARL?

1 code implementation • 27 May 2023 • Yihe Zhou, Shunyu Liu, Yunpeng Qing, KaiXuan Chen, Tongya Zheng, Yanhao Huang, Jie Song, Mingli Song

Despite the encouraging results achieved, CTDE makes an independence assumption on agent policies, which limits agents to adopt global cooperative information from each other during centralized training.

Multi-agent Reinforcement Learning · reinforcement-learning +2

Transition Propagation Graph Neural Networks for Temporal Networks

1 code implementation • 15 Apr 2023 • Tongya Zheng, Zunlei Feng, Tianli Zhang, Yunzhi Hao, Mingli Song, Xingen Wang, Xinyu Wang, Ji Zhao, Chun Chen

The proposed TIP-GNN focuses on the bilevel graph structure in temporal networks: besides the explicit interaction graph, a node's sequential interactions can also be constructed as a transition graph.

Graph Mining · Link Prediction +1

Graph-based Knowledge Distillation: A survey and experimental evaluation

1 code implementation • 27 Feb 2023 • Jing Liu, Tongya Zheng, Guanzheng Zhang, Qinfen Hao

It then provides a comprehensive summary of three types of Graph-based Knowledge Distillation methods: Graph-based Knowledge Distillation for deep neural networks (DKD), Graph-based Knowledge Distillation for GNNs (GKD), and Self-Knowledge Distillation based Graph-based Knowledge Distillation (SKD).

Self-Knowledge Distillation

Contrastive Identity-Aware Learning for Multi-Agent Value Decomposition

1 code implementation • 23 Nov 2022 • Shunyu Liu, Yihe Zhou, Jie Song, Tongya Zheng, KaiXuan Chen, Tongtian Zhu, Zunlei Feng, Mingli Song

Value Decomposition (VD) aims to deduce the contributions of agents for decentralized policies in the presence of only global rewards, and has recently emerged as a powerful credit assignment paradigm for tackling cooperative Multi-Agent Reinforcement Learning (MARL) problems.

Contrastive Learning · SMAC+

HIRE: Distilling High-order Relational Knowledge From Heterogeneous Graph Neural Networks

no code implementations • 25 Jul 2022 • Jing Liu, Tongya Zheng, Qinfen Hao

To the best of our knowledge, we are the first to propose a HIgh-order RElational (HIRE) knowledge distillation framework on heterogeneous graphs, which can significantly boost the prediction performance regardless of model architectures of HGNNs.

Knowledge Distillation · Vocal Bursts Intensity Prediction

Learning Dynamic Preference Structure Embedding From Temporal Networks

1 code implementation • 23 Nov 2021 • Tongya Zheng, Zunlei Feng, Yu Wang, Chengchao Shen, Mingli Song, Xingen Wang, Xinyu Wang, Chun Chen, Hao Xu

Our proposed Dynamic Preference Structure (DPS) framework consists of two stages: structure sampling and graph fusion.

Graph Sampling
