Search Results for author: Kaj Nyström

Found 6 papers, 2 papers with code

Neural ODEs as the Deep Limit of ResNets with constant weights

2 code implementations • arXiv 2019 • Benny Avelin, Kaj Nyström

In this paper we prove that, in the deep limit, the stochastic gradient descent on a ResNet type deep neural network, where each layer shares the same weight matrix, converges to the stochastic gradient descent for a Neural ODE and that the corresponding value/loss functions converge.
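
Not code from the paper, but a minimal PyTorch sketch of the object the abstract describes: a ResNet-type network whose layers all share one set of weights is an explicit Euler discretization of a neural ODE, so letting the depth grow (with step size 1/depth) gives the ODE limit. The width, activation, and step-size scaling below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SharedWeightResNet(nn.Module):
    """ResNet-type network in which every residual block reuses the same weights.

    The forward pass computes x_{k+1} = x_k + (1/N) * f(x_k), k = 0, ..., N-1,
    i.e. an explicit Euler scheme for the neural ODE dx/dt = f(x) on [0, 1].
    """

    def __init__(self, dim: int, depth: int):
        super().__init__()
        self.depth = depth
        # One set of weight matrices shared by all "layers" of the network.
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))

    def forward(self, x):
        h = 1.0 / self.depth          # step size shrinks as the network gets deeper
        for _ in range(self.depth):   # every step reuses the same parameters self.f
            x = x + h * self.f(x)
        return x

# As the depth grows, the network approaches the time-1 flow map of dx/dt = f(x).
model = SharedWeightResNet(dim=2, depth=64)
y = model(torch.randn(8, 2))
```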

Data-driven discovery of PDEs in complex datasets

1 code implementation • 31 Aug 2018 • Jens Berg, Kaj Nyström

A different approach is to measure the quantities of interest and use deep learning to reverse-engineer the PDEs that describe the physical process.

Model Selection
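
As an illustration of the reverse-engineering idea in the entry above (a hedged sketch, not the authors' implementation): fit a smooth network surrogate to the measurements, differentiate it with automatic differentiation, and regress the time derivative onto a library of candidate terms. The particular candidate library and the plain least-squares fit here are assumptions made for brevity.

```python
import torch
import torch.nn as nn

# Suppose u_data holds noisy measurements of u(x, t) at scattered points xt,
# with xt[:, 0] = x and xt[:, 1] = t.
surrogate = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1)
)

def fit_surrogate(xt, u_data, steps=2000):
    """Step 1: fit a smooth surrogate u_theta(x, t) to the measured data."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((surrogate(xt) - u_data) ** 2).mean()
        loss.backward()
        opt.step()

def discover_pde(xt):
    """Step 2: differentiate the surrogate and regress u_t on candidate terms."""
    xt = xt.clone().requires_grad_(True)
    u = surrogate(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    # Illustrative candidate library: u_t ≈ c1*u + c2*u_x + c3*u_xx + c4*u*u_x
    library = torch.cat([u, u_x, u_xx, u * u_x], dim=1)
    coeffs = torch.linalg.lstsq(library.detach(), u_t.detach()).solution
    return coeffs  # nonzero entries indicate which terms appear in the PDE
```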

Neural network augmented inverse problems for PDEs

no code implementations • 27 Dec 2017 • Jens Berg, Kaj Nyström

In this paper we show how to augment classical methods for inverse problems with artificial neural networks.
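
One way such an augmentation can look (a hedged, illustrative sketch under assumptions of my own, not the paper's formulation): in a 1-D coefficient inverse problem -(q(x) u'(x))' = f(x), represent the unknown coefficient q by a neural network and fit it by minimizing the PDE residual plus the misfit to observed values of u.

```python
import torch
import torch.nn as nn

# Hypothetical 1-D inverse problem: recover q(x) in -(q(x) u'(x))' = f(x)
# from observations of u. Both the coefficient and the solution are
# parameterized by small networks; Softplus keeps q positive.
q_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1), nn.Softplus())
u_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def pde_residual(x, f):
    """Pointwise squared residual of -(q u')' = f at collocation points x."""
    x = x.clone().requires_grad_(True)
    u = u_net(x)
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    flux = q_net(x) * u_x
    flux_x = torch.autograd.grad(flux.sum(), x, create_graph=True)[0]
    return (-flux_x - f) ** 2

def loss(x_colloc, f_colloc, x_obs, u_obs):
    pde = pde_residual(x_colloc, f_colloc).mean()
    data = ((u_net(x_obs) - u_obs) ** 2).mean()
    return pde + data

# Minimizing `loss` jointly over q_net and u_net yields both the reconstructed
# solution and the unknown coefficient.
```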

A unified deep artificial neural network approach to partial differential equations in complex geometries

no code implementations • 17 Nov 2017 • Jens Berg, Kaj Nyström

In this paper we use deep feedforward artificial neural networks to approximate solutions to partial differential equations in complex geometries.
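
A minimal collocation-style sketch of this kind of approximation, assuming a Poisson problem on the unit disk (the domain, the distance-function ansatz, and the network sizes are assumptions, not the paper's code): the factor that multiplies the network vanishes on the boundary, so the Dirichlet data is satisfied by construction and only the interior residual is minimized.

```python
import torch
import torch.nn as nn

# Illustrative problem: -Δu = f on the unit disk, u = g on the boundary.
# Ansatz: u(x) = g_ext(x) + d(x) * net(x), with d(x) = 1 - |x|^2 vanishing
# on |x| = 1, so the boundary condition holds exactly.
net = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1)
)

def u_hat(x, g_ext):
    d = 1.0 - (x ** 2).sum(dim=1, keepdim=True)   # vanishes on the boundary
    return g_ext(x) + d * net(x)

def pde_loss(x, f, g_ext):
    """Mean squared residual of -Δu = f at interior collocation points x."""
    x = x.clone().requires_grad_(True)
    u = u_hat(x, g_ext)
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(grad_u[:, 0].sum(), x, create_graph=True)[0][:, 0]
    u_yy = torch.autograd.grad(grad_u[:, 1].sum(), x, create_graph=True)[0][:, 1]
    lap = u_xx + u_yy
    return ((-lap - f) ** 2).mean()

# Training: sample collocation points inside the disk and minimize pde_loss with
# a stochastic optimizer; no boundary penalty is needed with this ansatz.
```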

Solving the Dirichlet problem for the Monge-Ampère equation using neural networks

no code implementations • 7 Oct 2021 • Kaj Nyström, Matias Vestberg

The Monge-Ampère equation is a fully nonlinear partial differential equation (PDE) of fundamental importance in analysis, geometry and in the applied sciences.
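
For context, the Dirichlet problem referred to in the title, written in its standard form (generic symbols, not necessarily the paper's notation):

```latex
% Dirichlet problem for the Monge-Ampère equation on a bounded convex domain \Omega:
% find a convex function u such that
\det\!\big(D^2 u\big) = f \quad \text{in } \Omega,
\qquad
u = g \quad \text{on } \partial\Omega,
% where D^2 u denotes the Hessian of u and f \ge 0.
```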

Deep learning, stochastic gradient descent and diffusion maps

no code implementations • 4 Apr 2022 • Carmina Fjellström, Kaj Nyström

Stochastic gradient descent (SGD) is widely used in deep learning due to its computational efficiency, but a complete understanding of why SGD performs so well remains a major challenge.

Computational Efficiency
