Search Results for author: Wenlei Shi

Found 5 papers, 3 papers with code

NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition

no code implementations • 20 Feb 2023 • Xinquan Huang, Wenlei Shi, Qi Meng, Yue Wang, Xiaotian Gao, Jia Zhang, Tie-Yan Liu

Neural networks have shown great potential in accelerating the solution of partial differential equations (PDEs).

Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation

1 code implementation • 10 Feb 2023 • Rui Zhang, Qi Meng, Rongchan Zhu, Yue Wang, Wenlei Shi, Shihua Zhang, Zhi-Ming Ma, Tie-Yan Liu

To address these limitations, we propose the Monte Carlo Neural PDE Solver (MCNP Solver) for training unsupervised neural solvers via the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
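The probabilistic representation referred to here can be illustrated on the simplest possible case. For the 1-D heat equation u_t = ν u_xx, the solution at (x, t) is the expectation of the initial condition over Brownian particles launched from x. The sketch below is only this textbook Feynman-Kac estimate, not the MCNP Solver itself; the function names and parameters are illustrative.

```python
# Minimal sketch of the probabilistic view of a PDE (NOT the MCNP Solver):
# for u_t = nu * u_xx, u(x, t) = E[u0(x + sqrt(2*nu*t) * Z)] with Z ~ N(0, 1),
# i.e. the macroscopic field is an ensemble average over random particles.
import numpy as np

def mc_heat_solution(u0, x, t, nu=0.01, n_particles=10_000, rng=None):
    """Monte Carlo estimate of the 1-D heat equation solution at points x, time t."""
    rng = np.random.default_rng() if rng is None else rng
    # One batch of Brownian endpoints per query point.
    z = rng.standard_normal((n_particles, x.size))
    endpoints = x[None, :] + np.sqrt(2.0 * nu * t) * z
    return u0(endpoints).mean(axis=0)

# Example: a Gaussian bump spreads out under diffusion.
x = np.linspace(-1.0, 1.0, 5)
u0 = lambda s: np.exp(-s**2 / 0.02)
print(mc_heat_solution(u0, x, t=0.5))
```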

LordNet: Learning to Solve Parametric Partial Differential Equations without Simulated Data

no code implementations • 19 Jun 2022 • Wenlei Shi, Xinquan Huang, Xiaotian Gao, Xinran Wei, Jia Zhang, Jiang Bian, Mao Yang, Tie-Yan Liu

Neural operators, as a powerful approximation to the non-linear operators between infinite-dimensional function spaces, have proved to be promising in accelerating the solution of partial differential equations (PDEs).
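As a rough illustration of training such a solver without simulated data, the sketch below fits a small placeholder CNN to a toy Poisson problem (-Δu = f with zero boundary) using only the finite-difference residual of the equation as the loss; the architecture, grid, and problem are assumptions for illustration and do not reproduce LordNet.

```python
# Hedged sketch: train an operator network with the PDE residual as the loss,
# so no simulated input-output pairs are needed. Toy problem and CNN only.
import torch
import torch.nn as nn
import torch.nn.functional as F

N, H = 32, 1.0 / 31                       # grid size and spacing on [0, 1]^2
lap_kernel = torch.tensor([[0., 1., 0.],
                           [1., -4., 1.],
                           [0., 1., 0.]]).view(1, 1, 3, 3) / H**2

model = nn.Sequential(                    # placeholder operator: source f -> solution u
    nn.Conv2d(1, 32, 3, padding=1), nn.GELU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.GELU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    f = torch.randn(16, 1, N, N)          # random source fields, no reference solutions
    u = model(f)
    # Residual of -Lap(u) = f on interior points, plus zero Dirichlet boundary penalty.
    interior = -F.conv2d(u, lap_kernel) - f[:, :, 1:-1, 1:-1]
    boundary = torch.cat([u[:, :, 0, :], u[:, :, -1, :],
                          u[:, :, :, 0], u[:, :, :, -1]], dim=-1)
    loss = interior.pow(2).mean() + boundary.pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```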

Learning Physics-Informed Neural Networks without Stacked Back-propagation

1 code implementation • 18 Feb 2022 • Di He, Shanda Li, Wenlei Shi, Xiaotian Gao, Jia Zhang, Jiang Bian, LiWei Wang, Tie-Yan Liu

In this work, we develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
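For context, the sketch below shows the standard PINN training step whose cost this paper targets: the PDE derivatives of the network output are obtained by differentiating through the network, and that derivative graph is then back-propagated through again during training (the "stacked back-propagation" of the title). The paper's own accelerated, derivative-free scheme is not reproduced here; the network and PDE are placeholders.

```python
# Baseline PINN step for the 1-D heat equation u_t = nu * u_xx, illustrating
# the nested differentiation that makes standard PINN training expensive.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
nu = 0.1

xt = torch.rand(256, 2, requires_grad=True)              # collocation points (x, t)
u = net(xt)
grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
u_x, u_t = grads[:, 0:1], grads[:, 1:2]
u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
loss = ((u_t - nu * u_xx) ** 2).mean()                   # PDE residual; BC/IC terms omitted
opt.zero_grad(); loss.backward(); opt.step()              # second back-prop through the graph above
```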

Cooperative Policy Learning with Pre-trained Heterogeneous Observation Representations

1 code implementation • 24 Dec 2020 • Wenlei Shi, Xinran Wei, Jia Zhang, Xiaoyuan Ni, Arthur Jiang, Jiang Bian, Tie-Yan Liu

While adopting complex GNN models with more informative message passing and aggregation mechanisms can clearly benefit heterogeneous vertex representations and cooperative policy learning, it also increases the training difficulty of MARL and demands more intense and direct reward signals than the original global reward.

Tasks: Graph Attention, Multi-agent Reinforcement Learning
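As a minimal illustration of the setup described in this entry (and not the paper's model), the sketch below encodes heterogeneous vertex observations with a small message-passing network and feeds each vertex embedding to a shared policy head; the feature sizes, random graph, and action space are placeholders.

```python
# Hedged sketch: GNN-encoded vertex observations feeding per-agent policies.
import torch
import torch.nn as nn

class GNNEncoder(nn.Module):
    def __init__(self, obs_dim, hid=64, layers=2):
        super().__init__()
        self.inp = nn.Linear(obs_dim, hid)
        self.msg = nn.ModuleList(nn.Linear(hid, hid) for _ in range(layers))

    def forward(self, obs, adj):
        # obs: (num_vertices, obs_dim); adj: (num_vertices, num_vertices) 0/1 matrix
        h = torch.relu(self.inp(obs))
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        for lin in self.msg:
            h = torch.relu(lin(adj @ h / deg))   # mean-aggregate neighbour messages
        return h                                  # one embedding per vertex/agent

encoder = GNNEncoder(obs_dim=8)
policy = nn.Linear(64, 5)                         # shared policy head over 5 actions
obs = torch.randn(10, 8)                          # 10 heterogeneous vertices
adj = (torch.rand(10, 10) > 0.7).float()          # placeholder interaction graph
logits = policy(encoder(obs, adj))                # per-agent action logits
```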
