no code implementations • 15 Dec 2023 • Zhongshu Xu, Yuan Chen, Qifan Chen, Dongbin Xiu
We present a numerical method to learn an accurate predictive model for an unknown stochastic dynamical system from its trajectory data.
no code implementations • 20 Jul 2023 • Victor Churchill, Dongbin Xiu
Flow map learning (FML), in conjunction with deep neural networks (DNNs), has shown promise for data-driven modeling of unknown dynamical systems.
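To make the flow-map idea concrete, the following sketch recovers the one-step map x_{n+1} = F(x_n) of an unknown system from trajectory data and then iterates it for prediction. A linear least-squares fit stands in for the DNN used in the papers; the dynamics, sample sizes, and variable names are illustrative assumptions, not the authors' setup.

```python
import numpy as np

# Minimal flow-map-learning sketch: learn the one-step map
# x_{n+1} = F(x_n) from observed trajectory pairs, then iterate it.
# A linear least-squares surrogate replaces the DNN (assumption).

rng = np.random.default_rng(0)
A_true = np.array([[0.9, -0.2], [0.1, 0.95]])  # "unknown" linear dynamics

# Trajectory data: pairs (x_n, x_{n+1})
X = rng.normal(size=(500, 2))
Y = X @ A_true.T

# Fit the flow map from data (ordinary least squares: X @ A ≈ Y)
A_learned, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Long-term prediction: iterate the learned map from a new initial state
x = np.array([1.0, 0.0])
for _ in range(50):
    x = A_learned.T @ x
```

With noiseless data the learned map matches the true one; with a DNN in place of the linear fit, the same iterate-the-learned-map loop produces the long-term predictions discussed in these entries.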
no code implementations • 5 May 2023 • Yuan Chen, Dongbin Xiu
Termed stochastic flow map learning (sFML), the new framework is an extension of flow map learning (FML) that was developed for learning deterministic dynamical systems.
no code implementations • 3 Jun 2022 • Victor Churchill, Dongbin Xiu
Recent work has focused on data-driven learning of the evolution of unknown systems via deep neural networks (DNNs), with the goal of conducting long-term prediction of the dynamics of the unknown system.
no code implementations • 12 May 2022 • Victor Churchill, Dongbin Xiu
A distinct feature of chaotic systems is that even the smallest perturbations will lead to large (albeit bounded) deviations in the solution trajectories.
no code implementations • 7 Mar 2022 • Victor Churchill, Steve Manns, Zhen Chen, Dongbin Xiu
In the proposed ensemble averaging method, multiple models are independently trained and model predictions are averaged at each time step.
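The ensemble-averaging recipe above can be sketched directly: several models of the same one-step map are fit independently, and their predictions are averaged at each time step of the recursive forecast. Scalar dynamics and least-squares fits are illustrative assumptions standing in for the trained DNNs.

```python
import numpy as np

# Ensemble-averaging sketch: fit several independent models of the
# scalar flow map x_{n+1} = a * x_n (each on its own noisy data),
# then average their one-step predictions during recursive prediction.

rng = np.random.default_rng(1)
a_true = 0.95
X = rng.normal(size=200)

models = []
for seed in range(10):  # 10 independently "trained" models
    noise = np.random.default_rng(seed).normal(scale=0.05, size=200)
    Y = a_true * X + noise            # each model sees its own noisy targets
    a_hat = (X @ Y) / (X @ X)         # least-squares slope estimate
    models.append(a_hat)

# Recursive prediction with the model outputs averaged at each step
x = 1.0
for _ in range(20):
    x = np.mean([a * x for a in models])
```

Averaging damps the independent per-model errors, which is the mechanism the entry describes for stabilizing long-term prediction.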
no code implementations • 3 Feb 2022 • Xiaohan Fu, Weize Mao, Lo-Bin Chang, Dongbin Xiu
We present a data-driven numerical approach for modeling unknown dynamical systems with missing/hidden parameters.
no code implementations • 7 Jun 2021 • Zhen Chen, Victor Churchill, Kailiang Wu, Dongbin Xiu
Consequently, a trained DNN defines a predictive model for the underlying unknown PDE over structureless grids.
no code implementations • 2 Jun 2020 • Tong Qin, Zhen Chen, John Jakeman, Dongbin Xiu
To circumvent the difficulty presented by the non-autonomous nature of the system, our method decomposes the solution into piecewise integrations of the system over a discrete set of time instances.
no code implementations • 20 Mar 2020 • Xiaohan Fu, Lo-Bin Chang, Dongbin Xiu
We then use a set of numerical examples to demonstrate the effectiveness of our method.
no code implementations • 5 Mar 2020 • Zhen Chen, Kailiang Wu, Dongbin Xiu
Various numerical examples are then presented to demonstrate the performance and properties of the numerical methods.
no code implementations • 11 Feb 2020 • Jun Hou, Tong Qin, Kailiang Wu, Dongbin Xiu
A novel correction algorithm is proposed for multi-class classification problems with corrupted training data.
no code implementations • 23 Jan 2020 • Zhen Chen, Dongbin Xiu
When an existing coarse model is not available, we present numerical strategies for fast creation of coarse models, to be used in conjunction with the generalized ResNet.
no code implementations • 15 Oct 2019 • Kailiang Wu, Dongbin Xiu
The evolution operator of the PDE, defined in infinite-dimensional space, maps the solution from a current time to a future time and completely characterizes the solution evolution of the underlying unknown PDE.
no code implementations • 24 May 2019 • Kailiang Wu, Tong Qin, Dongbin Xiu
We present a numerical approach for approximating unknown Hamiltonian systems using observation data.
no code implementations • 13 Nov 2018 • Tong Qin, Kailiang Wu, Dongbin Xiu
We demonstrate that the ResNet block can be considered as a one-step method that is exact in temporal integration.
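The observation above has a compact illustration: a ResNet block x_{n+1} = x_n + N(x_n) has exactly the form of a one-step time integrator, with the residual network N playing the role of the increment over one time step. In this sketch the "learned" residual is replaced by the known exact increment of x' = -x, an illustrative assumption.

```python
import numpy as np

# A ResNet block x_{n+1} = x_n + N(x_n) viewed as a one-step method.
# Here N is the exact one-step increment of x' = -x over dt = 0.1
# (standing in for a trained residual network).

def increment(x, dt=0.1):
    # Exact increment: x(t + dt) - x(t) = (exp(-dt) - 1) * x(t)
    return (np.exp(-dt) - 1.0) * x

def resnet_step(x):
    # Identity shortcut plus residual => one-step integrator
    return x + increment(x)

x = 1.0
for _ in range(10):
    x = resnet_step(x)
# Ten steps of size 0.1 integrate x' = -x exactly to t = 1.0
```

When the residual equals the true increment, the block is exact in temporal integration, which is the sense of the claim in this entry.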
no code implementations • 24 Sep 2018 • Kailiang Wu, Dongbin Xiu
We present effective numerical algorithms for locally recovering unknown governing differential equations from measurement data.
no code implementations • 22 Aug 2018 • Kailiang Wu, Dongbin Xiu
We present an explicit construction for a feedforward neural network (FNN) that provides a piecewise constant approximation of multivariate functions.
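One standard way to realize a (near-)piecewise-constant function with ReLU units, sketched below in one dimension, is to build narrow ReLU ramps that approximate step indicators and sum them; this illustrates the idea only and is not necessarily the paper's explicit construction. All names and values are assumptions.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def step_unit(x, a, k=1e4):
    # Difference of two ReLUs: a ramp of width 1/k approximating
    # the indicator 1{x >= a}.
    return relu(k * (x - a)) - relu(k * (x - a) - 1.0)

def piecewise_constant(x, breakpoints, values):
    # f(x) = values[0] + sum_i (values[i+1] - values[i]) * 1{x >= b_i}
    out = np.full_like(x, values[0], dtype=float)
    for b, v0, v1 in zip(breakpoints, values, values[1:]):
        out += (v1 - v0) * step_unit(x, b)
    return out

x = np.array([0.1, 0.4, 0.9])
y = piecewise_constant(x, breakpoints=[0.25, 0.75], values=[1.0, 3.0, 2.0])
# Away from the breakpoints, y matches the target constants [1, 3, 2]
```

Each `step_unit` is a two-neuron ReLU pair, so the whole function is a shallow FNN whose output is constant on each cell except in transition bands of width 1/k.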
1 code implementation • 22 May 2018 • Tong Qin, Ling Zhou, Dongbin Xiu
For neural networks (NNs) with rectified linear unit (ReLU) or binary activation functions, we show that their training can be accomplished in a reduced parameter space.
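A key fact behind reduced-parameter-space training for ReLU networks is positive homogeneity: relu(c·z) = c·relu(z) for c > 0, so each neuron's weights can be rescaled to unit norm with the scale absorbed into the output coefficient. The sketch below checks this invariance numerically; the normalization convention shown is an assumption, not necessarily the paper's exact parameterization.

```python
import numpy as np

# ReLU is positively homogeneous: relu(c * z) = c * relu(z) for c > 0.
# Hence a neuron a * relu(w @ x + b) is unchanged if (w, b) is scaled
# to unit norm and the scale c is moved into the output weight a.

def relu(z):
    return np.maximum(z, 0.0)

rng = np.random.default_rng(2)
w = rng.normal(size=3)
b, a = 0.5, 2.0
x = rng.normal(size=3)

c = np.sqrt(w @ w + b * b)  # joint norm of (w, b)
out_full = a * relu(w @ x + b)
out_reduced = (a * c) * relu((w / c) @ x + b / c)
```

Because the two outputs agree for every input, training can search over normalized (w, b) only, which is the reduction in parameter space the entry refers to.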