1 code implementation • 17 Feb 2024 • Xili Wang, Kejun Tang, Jiayu Zhai, Xiaoliang Wan, Chao Yang
In this work, we present a deep adaptive sampling method for surrogate modeling ($\text{DAS}^2$), where we generalize the deep adaptive sampling (DAS) method [Tang, Wan and Yang, 2023] to build surrogate models for low-regularity parametric differential equations.
no code implementations • 26 Oct 2023 • Xiaoliang Wan, Tao Zhou, Yuancheng Zhou
The first step is to solve the PDEs with the Deep Ritz method by minimizing an associated variational loss discretized at the collocation points in the training set.
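The minimization of a discretized variational loss can be sketched on a toy problem. The example below is a minimal stand-in, not the paper's method: it uses a one-parameter trial function $u(x) = a\,x(1-x)$ for $-u'' = 2$ on $(0,1)$ with zero boundary data (the actual Deep Ritz method uses a deep network), and minimizes the Monte Carlo estimate of the Ritz energy over random collocation points.

```python
import random

def energy(a, xs, f=2.0):
    # Monte Carlo estimate of the Ritz energy E(u) = ∫ (1/2 u'^2 - f u) dx
    # for the trial function u(x) = a * x * (1 - x), which satisfies u(0) = u(1) = 0.
    vals = []
    for x in xs:
        du = a * (1.0 - 2.0 * x)      # u'(x)
        u = a * x * (1.0 - x)
        vals.append(0.5 * du * du - f * u)
    return sum(vals) / len(vals)

random.seed(0)
xs = [random.random() for _ in range(5000)]   # collocation points in (0, 1)

# Gradient descent on the single parameter a, using a finite-difference gradient.
a, lr = 0.0, 0.5
for _ in range(200):
    eps = 1e-5
    grad = (energy(a + eps, xs) - energy(a - eps, xs)) / (2 * eps)
    a -= lr * grad

# For -u'' = 2 with zero Dirichlet data the exact solution is x(1 - x), i.e. a = 1.
print(round(a, 2))
```

Because the energy is only estimated at the sampled collocation points, the recovered coefficient deviates slightly from 1; the quality of the training set is exactly what the adaptive sampling line of work addresses.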
no code implementations • 30 May 2023 • Kejun Tang, Jiayu Zhai, Xiaoliang Wan, Chao Yang
The key idea is to use a deep generative model to adjust random samples in the training set such that the residual induced by the approximate PDE solution can maintain a smooth profile when it is being minimized.
no code implementations • 15 May 2023 • Li Zeng, Xiaoliang Wan, Tao Zhou
In this paper, we develop an invertible mapping, called B-KRnet, on a bounded domain and apply it to density estimation/approximation for data or the solutions of PDEs such as the Fokker-Planck equation and the Keller-Segel equation.
no code implementations • 1 Mar 2023 • Yani Feng, Kejun Tang, Xiaoliang Wan, Qifeng Liao
We present a dimension-reduced KRnet map approach (DR-KRnet) for high-dimensional Bayesian inverse problems, which is based on an explicit construction of a map that pushes forward the prior measure to the posterior measure in the latent space.
no code implementations • 26 Oct 2022 • Li Zeng, Xiaoliang Wan, Tao Zhou
To this end, we represent the solution with an explicit PDF model induced by a flow-based deep generative model, simplified KRnet, which constructs a transport map from a simple distribution to the target distribution.
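The mechanism behind such an explicit PDF model is the change-of-variables formula for an invertible transport map: the model density is the base density of the inverse image times the Jacobian of the inverse. A hypothetical one-dimensional stand-in (the map $x = e^z$ with a standard normal base, not KRnet itself) makes the bookkeeping concrete:

```python
import math

def base_logpdf(z):
    # Standard normal log-density, the "simple distribution" of the transport map.
    return -0.5 * z * z - 0.5 * math.log(2 * math.pi)

def model_logpdf(x):
    # Change of variables for the invertible map x = exp(z):
    # log p_X(x) = log p_Z(f^{-1}(x)) + log |d f^{-1}/dx|
    z = math.log(x)            # inverse map z = f^{-1}(x)
    log_det = -math.log(x)     # log |d f^{-1}/dx| = log(1/x)
    return base_logpdf(z) + log_det

# Sanity check against the known closed-form log-normal density.
x = 2.0
lognormal = math.exp(-0.5 * math.log(x) ** 2) / (x * math.sqrt(2 * math.pi))
print(abs(math.exp(model_logpdf(x)) - lognormal) < 1e-12)  # True
```

A flow model such as KRnet composes many parametrized invertible layers of this kind, so the log-determinants simply add up along the composition.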
1 code implementation • 28 Dec 2021 • Kejun Tang, Xiaoliang Wan, Chao Yang
In this work we propose a deep adaptive sampling (DAS) method for solving partial differential equations (PDEs), where deep neural networks are utilized to approximate the solutions of PDEs and deep generative models are employed to generate new collocation points that refine the training set.
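The refinement step of such an adaptive loop can be illustrated with a toy: draw new collocation points from a density proportional to the squared residual, so training effort concentrates where the current approximation is worst. The sketch below mimics that step with self-normalized resampling over a hypothetical peaked residual; in DAS the new points come from a trained deep generative model (KRnet), not from resampling.

```python
import random, math

def residual(x):
    # Hypothetical PDE residual with a sharp peak near x = 0.5; in DAS this
    # would be evaluated from the current neural-network approximation.
    return math.exp(-((x - 0.5) ** 2) / 1e-3)

random.seed(0)
uniform = [random.random() for _ in range(10000)]   # initial training set

# Refinement: draw new collocation points with probability proportional to the
# squared residual (a stand-in for sampling from the trained generative model).
weights = [residual(x) ** 2 for x in uniform]
new_points = random.choices(uniform, weights=weights, k=10000)

frac_near_peak = sum(1 for x in new_points if abs(x - 0.5) < 0.1) / len(new_points)
print(round(frac_near_peak, 2))  # the refined set concentrates near the peak
```

The uniform set wastes most of its points where the residual is negligible, whereas the residual-weighted set piles up in the low-regularity region, which is the intuition the DAS papers formalize.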
no code implementations • 26 May 2021 • Xiaoliang Wan, Kejun Tang
In the augmented KRnet, a fully nonlinear update is achieved in two iterations.
no code implementations • 20 Mar 2021 • Kejun Tang, Xiaoliang Wan, Qifeng Liao
In this paper we present an adaptive deep density approximation strategy based on KRnet (ADDA-KR) for solving the steady-state Fokker-Planck (F-P) equations.
no code implementations • 29 Jun 2020 • Xiaoliang Wan, Shuangqing Wei
VAE is used as a dimension reduction technique to capture the latent space, and KRnet is used to model the distribution of the latent variable.
no code implementations • 23 Jan 2019 • Xiaoliang Wan, Shuangqing Wei
An effective technique to reduce the variance is importance sampling: we employ the generative model to estimate the distribution of the data from the reduced-order model and use it for the change of measure in the importance sampling estimator.
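The change of measure in an importance sampling estimator can be shown on a standard rare-event toy (not the paper's reduced-order setting): estimating $P(X > 3)$ for $X \sim N(0,1)$ by sampling from a shifted proposal $N(3,1)$ and reweighting each draw by the likelihood ratio.

```python
import random, math

random.seed(0)
N = 50000
t = 3.0   # rare-event threshold

# Naive Monte Carlo: almost all samples miss the event, so the estimate is noisy.
naive = sum(1 for _ in range(N) if random.gauss(0, 1) > t) / N

# Importance sampling: draw from the shifted proposal N(t, 1) and reweight by
# the likelihood ratio p(y)/q(y) = exp(-t*y + t^2/2).
est = 0.0
for _ in range(N):
    y = random.gauss(t, 1)
    if y > t:
        est += math.exp(-t * y + t * t / 2)
est /= N

exact = 0.5 * math.erfc(t / math.sqrt(2))   # closed-form tail probability
print(naive, est, exact)
```

With the same budget, the reweighted estimator lands within a percent or so of the exact tail probability, while the naive one relies on only a handful of hits; choosing a good change of measure is exactly where the generative model enters in the paper.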