Search Results for author: Justin Sirignano

Found 21 papers, 2 papers with code

Weak Convergence Analysis of Online Neural Actor-Critic Algorithms

no code implementations • 25 Mar 2024 • Samuel Chun-Hei Lam, Justin Sirignano, Ziheng Wang

Then, using a Poisson equation, we prove that the fluctuations of the model updates around the limit distribution, due to the randomly arriving data samples, vanish as the number of parameter updates tends to infinity.

Kernel Limit of Recurrent Neural Networks Trained on Ergodic Data Sequences

1 code implementation • 28 Aug 2023 • Samuel Chun-Hei Lam, Justin Sirignano, Konstantinos Spiliopoulos

Mathematical methods are developed to characterize the asymptotics of recurrent neural networks (RNN) as the number of hidden units, data samples in the sequence, hidden state updates, and training steps simultaneously grow to infinity.

Global Convergence of Deep Galerkin and PINNs Methods for Solving Partial Differential Equations

no code implementations • 10 May 2023 • Deqing Jiang, Justin Sirignano, Samuel N. Cohen

In this paper, we prove global convergence for one of the commonly-used deep learning algorithms for solving PDEs, the Deep Galerkin Method (DGM).
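The Deep Galerkin Method trains a neural network to minimize the squared PDE residual at randomly sampled points, plus a boundary-condition penalty. A minimal sketch of that objective, using a toy one-parameter model in place of a network and a toy problem of our choosing (not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: u''(x) = 0 on (0, 1), u(0) = 0, u(1) = 1 (exact solution u = x).
# DGM-style objective: mean squared PDE residual at random interior points
# plus squared boundary mismatch. The "network" here is a one-parameter
# model u(x; theta) = theta * x, purely for illustration.

def u(x, theta):
    return theta * x

def dgm_loss(theta, n_pts=256, h=1e-4):
    x = rng.uniform(0.0, 1.0, n_pts)
    # second derivative via central finite differences
    u_xx = (u(x + h, theta) - 2 * u(x, theta) + u(x - h, theta)) / h**2
    residual = np.mean(u_xx ** 2)
    boundary = u(0.0, theta) ** 2 + (u(1.0, theta) - 1.0) ** 2
    return residual + boundary

# Minimize by grid search over theta (a stand-in for SGD over network weights).
thetas = np.linspace(-2, 2, 401)
best = min(thetas, key=dgm_loss)
print(round(best, 2))  # theta near 1 recovers the exact solution u(x) = x
```

In the actual method the model is a deep network, the residual uses automatic differentiation rather than finite differences, and the loss is minimized by stochastic gradient descent over freshly sampled points.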

Dynamic Deep Learning LES Closures: Online Optimization With Embedded DNS

no code implementations • 4 Mar 2023 • Justin Sirignano, Jonathan F. MacArt

The deep learning closure model is dynamically trained during a large-eddy simulation (LES) calculation using embedded direct numerical simulation (DNS) data.

Deep Learning Closure Models for Large-Eddy Simulation of Flows around Bluff Bodies

no code implementations • 6 Aug 2022 • Justin Sirignano, Jonathan F. MacArt

A deep learning (DL) closure model for large-eddy simulation (LES) is developed and evaluated for incompressible flows around a rectangular cylinder at moderate Reynolds numbers.

A Forward Propagation Algorithm for Online Optimization of Nonlinear Stochastic Differential Equations

no code implementations • 10 Jul 2022 • Ziheng Wang, Justin Sirignano

We then rewrite the algorithm using the PDE solution, which allows us to characterize the parameter evolution around the direction of steepest descent.

Neural Q-learning for solving PDEs

no code implementations • 31 Mar 2022 • Samuel N. Cohen, Deqing Jiang, Justin Sirignano

We develop a new numerical method for solving elliptic-type PDEs by adapting the Q-learning algorithm in reinforcement learning.

Task: Q-Learning
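For background, the classical tabular Q-learning update that the method builds on can be illustrated on a toy chain MDP (the chain and its rewards are our illustration, not the paper's PDE setting):

```python
import numpy as np

rng = np.random.default_rng(3)

# Tabular Q-learning update:
#   Q(s, a) += lr * (r + gamma * max_a' Q(s', a') - Q(s, a))
# Toy chain MDP: action 1 moves right, action 0 stays; reward 1 for
# entering the rightmost state from elsewhere.

n_states, n_actions = 5, 2
gamma, lr = 0.9, 0.1
Q = np.zeros((n_states, n_actions))

for _ in range(20000):
    s = int(rng.integers(n_states))
    a = int(rng.integers(n_actions))
    s_next = min(s + 1, n_states - 1) if a == 1 else s
    r = 1.0 if (s_next == n_states - 1 and s != n_states - 1) else 0.0
    Q[s, a] += lr * (r + gamma * Q[s_next].max() - Q[s, a])

# In every non-terminal state the learned policy moves right.
print(Q.argmax(axis=1)[:4])
```

The paper replaces the table with a neural network and adapts this fixed-point iteration to elliptic PDEs; the tabular version above is only the starting point.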

Continuous-time stochastic gradient descent for optimizing over the stationary distribution of stochastic differential equations

no code implementations • 14 Feb 2022 • Ziheng Wang, Justin Sirignano

The gradient estimate is simultaneously updated using forward propagation of the SDE state derivatives, asymptotically converging to the direction of steepest descent.

PDE-constrained Models with Neural Network Terms: Optimization and Global Convergence

no code implementations • 18 May 2021 • Justin Sirignano, Jonathan MacArt, Konstantinos Spiliopoulos

Recent research has used deep learning to develop partial differential equation (PDE) models in science and engineering.

Embedded training of neural-network sub-grid-scale turbulence models

no code implementations • 3 May 2021 • Jonathan F. MacArt, Justin Sirignano, Jonathan B. Freund

The weights of a deep neural network model are optimized in conjunction with the governing flow equations to provide a model for sub-grid-scale stresses in a temporally developing plane turbulent jet at Reynolds number $Re_0=6\, 000$.

DPM: A deep learning PDE augmentation method (with application to large-eddy simulation)

no code implementations • 20 Nov 2019 • Jonathan B. Freund, Jonathan F. MacArt, Justin Sirignano

A deep neural network is embedded in a partial differential equation (PDE) that expresses the known physics and learns to describe the corresponding unknown or unrepresented physics from the data.

Asymptotics of Reinforcement Learning with Neural Networks

no code implementations • 13 Nov 2019 • Justin Sirignano, Konstantinos Spiliopoulos

In addition, we study the convergence of the limit differential equation to the stationary solution.

Tasks: Q-Learning, Reinforcement Learning +1

Scaling Limit of Neural Networks with the Xavier Initialization and Convergence to a Global Minimum

no code implementations • 9 Jul 2019 • Justin Sirignano, Konstantinos Spiliopoulos

We analyze single-layer neural networks with the Xavier initialization in the asymptotic regime of large numbers of hidden units and large numbers of stochastic gradient descent training steps.
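The standard Xavier (Glorot) recipe sets the weight variance to $2/(n_{\mathrm{in}}+n_{\mathrm{out}})$ so that layer outputs keep order-one variance as the width grows. A quick empirical check of that scaling (our illustration, not the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_init(fan_in, fan_out):
    # Xavier/Glorot initialization: variance 2 / (fan_in + fan_out),
    # chosen so activations keep roughly unit variance layer to layer.
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_in, fan_out))

n = 2000                      # large hidden-unit count
W = xavier_init(n, n)
x = rng.normal(size=n)        # unit-variance input
h = x @ W                     # pre-activations of a single layer
print(round(h.var(), 1))      # close to 1: variance roughly preserved
```

With, say, a naive unit-variance initialization instead, `h.var()` would be of order `n`, which is why the scaling matters in the wide-network limit.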

Mean Field Analysis of Deep Neural Networks

no code implementations • 11 Mar 2019 • Justin Sirignano, Konstantinos Spiliopoulos

The limit procedure is valid for any number of hidden layers and it naturally also describes the limiting behavior of the training loss.

Mean Field Analysis of Neural Networks: A Central Limit Theorem

no code implementations • 28 Aug 2018 • Justin Sirignano, Konstantinos Spiliopoulos

We rigorously prove a central limit theorem for neural network models with a single hidden layer.

Task: Speech Recognition
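The central limit theorem in question can be seen empirically in a toy mean-field network (our illustration, not the paper's proof): the scaled fluctuation of a width-$N$ random network around its mean behaves like a Gaussian.

```python
import numpy as np

rng = np.random.default_rng(2)

# Mean-field single-hidden-layer network with i.i.d. random weights:
#   f_N(x) = (1/N) * sum_i c_i * tanh(w_i * x)
# The LLN gives f_N -> E[c * tanh(w x)]; the CLT says sqrt(N) * (f_N - mean)
# is asymptotically Gaussian. We check the fourth-moment ratio (~3 for Gaussian).

def f_N(x, N):
    c = rng.normal(size=N)
    w = rng.normal(size=N)
    return np.mean(c * np.tanh(w * x))

N, x = 2000, 0.7
samples = np.array([f_N(x, N) for _ in range(3000)])
fluct = np.sqrt(N) * (samples - samples.mean())
kurtosis = np.mean(fluct**4) / np.mean(fluct**2) ** 2
print(round(kurtosis, 1))   # near 3, the Gaussian value
```

The paper proves the analogous statement for trained networks, where the fluctuations around the mean-field limit also depend on the SGD dynamics rather than only on the random initialization used here.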

Universal features of price formation in financial markets: perspectives from Deep Learning

no code implementations • 19 Mar 2018 • Justin Sirignano, Rama Cont

The universal price formation model is shown to exhibit a remarkably stable out-of-sample prediction accuracy across time, for a wide range of stocks from different sectors.

Task: Time Series Analysis

Stochastic Gradient Descent in Continuous Time: A Central Limit Theorem

no code implementations • 11 Oct 2017 • Justin Sirignano, Konstantinos Spiliopoulos

Stochastic gradient descent in continuous time (SGDCT) provides a computationally efficient method for the statistical learning of continuous-time models, which are widely used in science, engineering, and finance.

DGM: A deep learning algorithm for solving partial differential equations

8 code implementations • 24 Aug 2017 • Justin Sirignano, Konstantinos Spiliopoulos

The algorithm is tested on a class of high-dimensional free boundary PDEs, which we are able to accurately solve in up to $200$ dimensions.

Stochastic Gradient Descent in Continuous Time

no code implementations • 17 Nov 2016 • Justin Sirignano, Konstantinos Spiliopoulos

Stochastic gradient descent in continuous time (SGDCT) provides a computationally efficient method for the statistical learning of continuous-time models, which are widely used in science, engineering, and finance.
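The idea behind SGDCT is to update parameters continuously along an observed path of the process, rather than on a batch of i.i.d. samples. A minimal sketch under assumptions of our own choosing (a one-dimensional Ornstein-Uhlenbeck model and a simple decreasing learning rate, not the paper's general setting):

```python
import numpy as np

rng = np.random.default_rng(1)

# SGDCT-style sketch: estimate the drift parameter a of
#   dX_t = -a X_t dt + dW_t
# from a single simulated path, taking a gradient step on the instantaneous
# squared error at every time increment, with a learning rate decreasing in time.

a_true, dt, T = 2.0, 1e-3, 200.0
n = int(T / dt)
dW = np.sqrt(dt) * rng.normal(size=n)

theta, x, t = 0.0, 1.0, 0.0
for k in range(n):
    t += dt
    alpha = 10.0 / (10.0 + t)            # decreasing learning rate
    dx = -a_true * x * dt + dW[k]        # observed increment of the path
    err = dx + theta * x * dt            # residual of the model drift -theta * x
    theta -= alpha * err * x             # continuous-time stochastic gradient step
    x += dx

print(round(theta, 1))                   # close to the true drift a = 2.0
```

The noise term in each step has mean zero given the past, so the updates drift toward the minimizer while the martingale fluctuations average out; this averaging is what the convergence analysis makes rigorous.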

Deep Learning for Limit Order Books

no code implementations • 8 Jan 2016 • Justin Sirignano

The spatial neural network outperforms other models such as the naive empirical model, logistic regression (with nonlinear features), and a standard neural network architecture.

Tasks: Management, Regression
