no code implementations • 25 Mar 2024 • Samuel Chun-Hei Lam, Justin Sirignano, Ziheng Wang
Then, using a Poisson equation, we prove that the fluctuations of the model updates around the limit distribution, which arise from the randomly-arriving data samples, vanish as the number of parameter updates tends to infinity.
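As background on the technique (notation ours, not the paper's): the Poisson-equation argument splits each stochastic gradient into its stationary mean plus a fluctuation, and solves a Poisson equation for the fluctuation so that its accumulated effect telescopes into vanishing terms:

```latex
% Schematic Poisson-equation argument (our notation, not the paper's).
% Data samples x_k arrive as a Markov chain with generator \mathcal{L}
% and stationary law \pi.
\begin{align*}
\theta_{k+1} &= \theta_k - \alpha_k \,\nabla_\theta f(\theta_k, x_k), \\
\bar{g}(\theta) &= \int \nabla_\theta f(\theta, x)\, \pi(dx), \\
\mathcal{L}\, G(\theta, \cdot)(x) &= \nabla_\theta f(\theta, x) - \bar{g}(\theta).
\end{align*}
% Writing the noise as \mathcal{L}G and summing by parts turns the accumulated
% fluctuations into a martingale plus O(\alpha_k) remainders, which vanish as
% the number of updates grows.
```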
1 code implementation • 28 Aug 2023 • Samuel Chun-Hei Lam, Justin Sirignano, Konstantinos Spiliopoulos
Mathematical methods are developed to characterize the asymptotics of recurrent neural networks (RNN) as the number of hidden units, data samples in the sequence, hidden state updates, and training steps simultaneously grow to infinity.
no code implementations • 10 May 2023 • Deqing Jiang, Justin Sirignano, Samuel N. Cohen
In this paper, we prove global convergence for one of the commonly-used deep learning algorithms for solving PDEs, the Deep Galerkin Method (DGM).
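For context, the objective analyzed is, schematically and in our notation, a least-squares PDE residual with a boundary penalty:

```latex
% Schematic DGM objective for a PDE N[u] = 0 on \Omega with u = g on
% \partial\Omega (our notation; the paper's setting is more general):
J(\theta) = \mathbb{E}_{x \sim \nu_{\Omega}}\!\left[\big(\mathcal{N}[u_\theta](x)\big)^2\right]
          + \lambda\, \mathbb{E}_{y \sim \nu_{\partial\Omega}}\!\left[\big(u_\theta(y) - g(y)\big)^2\right].
```

The paper shows that gradient descent on an objective of this form drives the network toward the PDE solution in the large-network limit.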
no code implementations • 4 Mar 2023 • Justin Sirignano, Jonathan F. MacArt
The deep learning closure model is dynamically trained during a large-eddy simulation (LES) calculation using embedded direct numerical simulation (DNS) data.
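The paper's in-situ coupling of DNS and LES is more involved, but the kind of supervision signal involved can be illustrated: given a DNS field, the exact subgrid-scale stress is computable by filtering. A minimal sketch, with a hypothetical `box_filter` helper:

```python
import numpy as np

def box_filter(f, w):
    """Top-hat filter of width w grid points along each axis, as a periodic
    moving average. Hypothetical helper for illustration only."""
    for ax in range(f.ndim):
        kernel = np.ones(w) / w
        f = np.apply_along_axis(
            lambda v: np.convolve(np.concatenate([v, v[:w - 1]]), kernel, "valid"),
            ax, f)
    return f

def exact_sgs_stress(u, v, w_width=4):
    """Exact filtered-DNS subgrid stress tau_xy = bar(uv) - bar(u) bar(v).
    Targets of this kind can supervise a learned LES closure."""
    return (box_filter(u * v, w_width)
            - box_filter(u, w_width) * box_filter(v, w_width))
```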
no code implementations • 6 Aug 2022 • Justin Sirignano, Jonathan F. MacArt
A deep learning (DL) closure model for large-eddy simulation (LES) is developed and evaluated for incompressible flows around a rectangular cylinder at moderate Reynolds numbers.
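As a concrete, heavily simplified illustration of what such a closure is (a sketch, not the paper's architecture): an MLP mapping local filtered velocity gradients to a subgrid forcing term added to the LES momentum equation:

```python
import torch
import torch.nn as nn

class SGSClosure(nn.Module):
    """Minimal sketch of a deep-learning LES closure (not the paper's
    architecture): maps local filtered velocity gradients to a subgrid
    forcing term for the LES momentum equation."""
    def __init__(self, n_inputs=9, n_hidden=64, n_outputs=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, n_outputs))

    def forward(self, grad_u):    # grad_u: (batch, 9) flattened velocity gradient
        return self.net(grad_u)   # (batch, 3) closure force per grid point
```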
no code implementations • 10 Jul 2022 • Ziheng Wang, Justin Sirignano
We then rewrite the algorithm using the PDE solution, which allows us to characterize the parameter evolution around the direction of steepest descent.
no code implementations • 31 Mar 2022 • Samuel N. Cohen, Deqing Jiang, Justin Sirignano
We develop a new numerical method for solving elliptic-type PDEs by adapting the Q-learning algorithm in reinforcement learning.
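A rough sketch of the Bellman-style idea, under our own simplifications rather than the paper's full scheme: the Feynman-Kac fixed point of $\frac{1}{2}\Delta u - \gamma u + f = 0$ on $\mathbb{R}^d$ yields a one-Brownian-step regression target, fitted against a frozen "target network" as in Q-learning. All names and constants below are hypothetical:

```python
import copy
import math
import torch

d, dt, gamma = 10, 0.01, 1.0
f = lambda x: torch.sin(x).sum(dim=1, keepdim=True)   # hypothetical source term
u = torch.nn.Sequential(torch.nn.Linear(d, 64), torch.nn.Tanh(),
                        torch.nn.Linear(64, 1))
u_frozen = copy.deepcopy(u)                           # frozen target network
opt = torch.optim.Adam(u.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.randn(256, d)                           # sampled interior points
    x_next = x + math.sqrt(dt) * torch.randn_like(x)  # one Brownian increment
    with torch.no_grad():                             # Bellman-style target
        y = f(x) * dt + math.exp(-gamma * dt) * u_frozen(x_next)
    loss = ((u(x) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 100 == 0:
        u_frozen.load_state_dict(u.state_dict())      # periodic target refresh
```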
no code implementations • 14 Feb 2022 • Ziheng Wang, Justin Sirignano
The gradient estimate is simultaneously updated using forward propagation of the SDE state derivatives, asymptotically converging to the direction of steepest descent.
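To make the mechanism concrete, a toy sketch (our example, not the paper's setting): for an Ornstein-Uhlenbeck model, the sensitivity $\tilde{X} = dX/d\theta$ is propagated forward alongside the state and drives an online parameter update:

```python
import numpy as np

# Toy example (ours): forward sensitivities for
#   dX = kappa * (theta - X) dt + sigma dW,
# minimizing the long-run loss E[(X - 1)^2]/2 over theta, so theta -> 1.
rng = np.random.default_rng(0)
kappa, sigma, dt, lr = 2.0, 0.3, 0.01, 0.05
X, Xdot, theta = 0.0, 0.0, 3.0          # Xdot = dX/dtheta, propagated forward
for k in range(200_000):
    dW = rng.normal(scale=np.sqrt(dt))
    grad = (X - 1.0) * Xdot              # chain rule: dl/dtheta along the path
    theta -= lr * grad * dt              # continuous-time SGD step
    Xdot += kappa * (1.0 - Xdot) * dt    # tangent (sensitivity) dynamics
    X += kappa * (theta - X) * dt + sigma * dW
print(theta)                             # ~= 1.0, the steepest-descent optimum
```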
no code implementations • 18 May 2021 • Justin Sirignano, Jonathan MacArt, Konstantinos Spiliopoulos
Recent research has used deep learning to develop partial differential equation (PDE) models in science and engineering.
no code implementations • 3 May 2021 • Jonathan F. MacArt, Justin Sirignano, Jonathan B. Freund
The weights of a deep neural network model are optimized in conjunction with the governing flow equations to provide a model for sub-grid-scale stresses in a temporally developing plane turbulent jet at Reynolds number $Re_0=6\, 000$.
no code implementations • 20 Nov 2019 • Jonathan B. Freund, Jonathan F. MacArt, Justin Sirignano
A deep neural network is embedded in a partial differential equation (PDE) that expresses the known physics and learns to describe the corresponding unknown or unrepresented physics from the data.
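A minimal sketch of the idea, using a toy 1-D diffusion rather than the papers' flow equations: a network $s_\theta$ is embedded in the discretized PDE and trained end-to-end by backpropagating through the time-stepping loop (the discrete analogue of an adjoint computation). All details below are ours:

```python
import torch
import torch.nn as nn

# Toy embedded-network PDE: du/dt = nu * u_xx + s_theta(u), periodic in x.
nx, nt, dx, dt, nu = 64, 50, 1.0 / 64, 1e-4, 0.1
s = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(s.parameters(), lr=1e-3)

def rollout(u0):
    u = u0
    for _ in range(nt):
        u_xx = (torch.roll(u, 1) - 2 * u + torch.roll(u, -1)) / dx**2
        u = u + dt * (nu * u_xx + s(u.unsqueeze(1)).squeeze(1))
    return u

u0 = torch.sin(2 * torch.pi * torch.arange(nx) / nx)
u_obs = torch.exp(torch.tensor(-1.0)) * u0        # hypothetical "observed" field
for it in range(500):
    loss = ((rollout(u0) - u_obs) ** 2).mean()    # adjoint via autograd
    opt.zero_grad(); loss.backward(); opt.step()
```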
no code implementations • 13 Nov 2019 • Justin Sirignano, Konstantinos Spiliopoulos
In addition, we study the convergence of the limit differential equation to the stationary solution.
no code implementations • 9 Jul 2019 • Justin Sirignano, Konstantinos Spiliopoulos
We analyze single-layer neural networks with the Xavier initialization in the asymptotic regime of large numbers of hidden units and large numbers of stochastic gradient descent training steps.
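The scaling regime in question can be written down in a few lines (our notation): output weights normalized by $1/\sqrt{N}$, under which the network at initialization behaves like a Gaussian process for large $N$:

```python
import numpy as np

# Sketch of the Xavier-scaled single hidden layer studied (our notation):
#   g^N(x) = (1/sqrt(N)) * sum_i c_i * tanh(w_i . x)
def xavier_net(x, N=10_000, rng=np.random.default_rng(0)):
    W = rng.normal(size=(N, x.shape[0]))      # O(1) hidden weights
    c = rng.normal(size=N)                    # O(1) output weights
    return (c @ np.tanh(W @ x)) / np.sqrt(N)  # 1/sqrt(N) output scaling
```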
no code implementations • 11 Mar 2019 • Justin Sirignano, Konstantinos Spiliopoulos
The limit procedure is valid for any number of hidden layers and it naturally also describes the limiting behavior of the training loss.
no code implementations • 28 Aug 2018 • Justin Sirignano, Konstantinos Spiliopoulos
We rigorously prove a central limit theorem for neural network models with a single hidden layer.
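The objects involved in a CLT of this type are, in our notation:

```latex
% Empirical measure of the N hidden-unit parameters, its law-of-large-numbers
% limit \bar{\mu}_t, and the rescaled fluctuation process, which converges to
% a limiting Gaussian process solving a linear SPDE:
\mu^N_t = \frac{1}{N}\sum_{i=1}^{N} \delta_{\theta^i_t}, \qquad
\eta^N_t = \sqrt{N}\,\big(\mu^N_t - \bar{\mu}_t\big) \;\Rightarrow\; \eta_t .
```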
no code implementations • 19 Mar 2018 • Justin Sirignano, Rama Cont
The universal price formation model is shown to exhibit a remarkably stable out-of-sample prediction accuracy across time, for a wide range of stocks from different sectors.
no code implementations • 11 Oct 2017 • Justin Sirignano, Konstantinos Spiliopoulos
Stochastic gradient descent in continuous time (SGDCT) provides a computationally efficient method for the statistical learning of continuous-time models, which are widely used in science, engineering, and finance.
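Schematically, the SGDCT update is (our notation):

```latex
% Parameters descend the gradient of an instantaneous loss g evaluated along
% the continuous data stream X_t, with a decaying learning rate \alpha_t:
d\theta_t = -\,\alpha_t\, \nabla_\theta g(X_t; \theta_t)\, dt .
```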
8 code implementations • 24 Aug 2017 • Justin Sirignano, Konstantinos Spiliopoulos
The algorithm is tested on a class of high-dimensional free boundary PDEs, which we are able to accurately solve in up to $200$ dimensions.
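A minimal DGM-style sketch (ours, far smaller than the paper's architecture): minimize the sampled residual of $\Delta u = 2d$ on $[0,1]^d$, whose exact solution $u^*(x) = \|x\|^2$ makes the result easy to check:

```python
import torch
import torch.nn as nn

d = 20
u = nn.Sequential(nn.Linear(d, 128), nn.Tanh(),
                  nn.Linear(128, 128), nn.Tanh(), nn.Linear(128, 1))
opt = torch.optim.Adam(u.parameters(), lr=1e-3)

def laplacian(f, x):
    """Sum of second derivatives of f w.r.t. each coordinate of x."""
    g = torch.autograd.grad(f.sum(), x, create_graph=True)[0]
    return sum(torch.autograd.grad(g[:, i].sum(), x, create_graph=True)[0][:, i]
               for i in range(x.shape[1]))

for step in range(2000):
    x = torch.rand(256, d, requires_grad=True)       # interior samples
    xb = torch.rand(256, d)                          # boundary samples: pin one
    idx = torch.randint(d, (256,))                   # coordinate to a face
    xb[torch.arange(256), idx] = torch.randint(2, (256,)).float()
    res = laplacian(u(x), x) - 2.0 * d               # PDE residual
    bc = u(xb).squeeze(1) - (xb ** 2).sum(dim=1)     # boundary mismatch
    loss = (res ** 2).mean() + (bc ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```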
no code implementations • 17 Nov 2016 • Justin Sirignano, Konstantinos Spiliopoulos
Stochastic gradient descent in continuous time (SGDCT) provides a computationally efficient method for the statistical learning of continuous-time models, which are widely used in science, engineering, and finance.
no code implementations • 8 Jan 2016 • Justin Sirignano
The spatial neural network outperforms other models such as the naive empirical model, logistic regression (with nonlinear features), and a standard neural network architecture.