no code implementations • 4 Apr 2022 • Carmina Fjellström, Kaj Nyström
Stochastic gradient descent (SGD) is widely used in deep learning due to its computational efficiency, but a complete understanding of why SGD performs so well remains a major challenge.
no code implementations • 7 Oct 2021 • Kaj Nyström, Matias Vestberg
The Monge-Ampère equation is a fully nonlinear partial differential equation (PDE) of fundamental importance in analysis, geometry and in the applied sciences.
2 code implementations • arXiv 2019 • Benny Avelin, Kaj Nyström
In this paper we prove that, in the deep limit, stochastic gradient descent on a ResNet-type deep neural network in which each layer shares the same weight matrix converges to stochastic gradient descent for a neural ODE, and that the corresponding value/loss functions converge.
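The deep-limit statement can be illustrated numerically. Below is a minimal sketch (not the paper's code; the width-3 `tanh` network, the weight matrix `W`, and the depths are illustrative choices): a ResNet whose layers all share one weight matrix is the explicit Euler scheme, with step size 1/depth, for the neural ODE dx/dt = tanh(Wx), so doubling the depth changes the output less and less.

```python
import numpy as np

# Shared-weight ResNet as an Euler discretization of a neural ODE.
# Assumed setup (illustrative, not from the paper): state dimension 3,
# activation tanh, random fixed weight matrix W, flow up to time t = 1.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(3, 3))
x0 = rng.normal(size=3)

def resnet_forward(x, depth):
    """Shared-weight ResNet: x_{k+1} = x_k + (1/depth) * tanh(W x_k)."""
    h = 1.0 / depth
    for _ in range(depth):
        x = x + h * np.tanh(W @ x)
    return x

# As depth grows, outputs cluster around a deep-limit value (the ODE flow):
# the coarse network is farther from the reference than the fine one.
out_coarse = resnet_forward(x0, 10)
out_fine = resnet_forward(x0, 1000)
out_ref = resnet_forward(x0, 2000)
gap_coarse = np.linalg.norm(out_coarse - out_ref)
gap_fine = np.linalg.norm(out_fine - out_ref)
```

The observed first-order shrinking of the gap is exactly the Euler-scheme convergence that underlies the deep-limit result.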
1 code implementation • 31 Aug 2018 • Jens Berg, Kaj Nyström
A different approach is to measure the quantities of interest and use deep learning to reverse engineer the PDEs that describe the physical process.
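A simplified stand-in for this idea (the paper uses deep learning; here an ordinary least-squares fit on a single candidate term keeps the sketch dependency-free, and the heat equation with coefficient 0.5 is an assumed toy problem): given measurements of u(x, t), recover the unknown coefficient c in u_t = c u_xx by regressing finite-difference derivatives.

```python
import numpy as np

# Toy data: u(x, t) = exp(-c t) sin(x) solves u_t = c u_xx exactly.
c_true = 0.5
x = np.linspace(0, np.pi, 201)
t = np.linspace(0, 0.1, 101)
X, T = np.meshgrid(x, t, indexing="ij")
u = np.exp(-c_true * T) * np.sin(X)

# Finite-difference derivatives of the "measured" field.
dx, dt = x[1] - x[0], t[1] - t[0]
u_t = np.gradient(u, dt, axis=1)
u_xx = np.gradient(np.gradient(u, dx, axis=0), dx, axis=0)

# Least-squares fit of u_t ≈ c * u_xx on interior points (edges trimmed
# because one-sided differences there are less accurate).
a = u_xx[2:-2, 2:-2].ravel()
b = u_t[2:-2, 2:-2].ravel()
c_fit = (a @ b) / (a @ a)
```

With a larger library of candidate terms (u, u_x, u u_x, ...), the same regression viewpoint lets the data select which PDE terms are active; the deep-learning version replaces the linear fit with a trained network.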
no code implementations • 27 Dec 2017 • Jens Berg, Kaj Nyström
In this paper we show how to augment classical methods for inverse problems with artificial neural networks.
no code implementations • 17 Nov 2017 • Jens Berg, Kaj Nyström
In this paper we use deep feedforward artificial neural networks to approximate solutions to partial differential equations in complex geometries.
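The collocation idea behind such methods can be sketched in a few lines. In this hedged example the deep network is replaced by a small sine-feature model so it stays dependency-free (the problem -u'' = sin(x) on (0, π) with zero boundary values, and the five-term ansatz, are illustrative choices, not the paper's setup): pick an ansatz that satisfies the boundary conditions by construction, then fit the PDE residual at interior collocation points.

```python
import numpy as np

# Solve -u'' = f on (0, pi), u(0) = u(pi) = 0, with ansatz
# u(x) = sum_k c_k sin(k x); every feature vanishes at the boundary,
# so the boundary conditions hold automatically.
f = lambda x: np.sin(x)                 # source term; exact solution is sin(x)
xs = np.linspace(0.1, np.pi - 0.1, 50)  # interior collocation points
ks = np.arange(1, 6)

# -d^2/dx^2 sin(k x) = k^2 sin(k x), so the residual equations are linear
# in the coefficients and reduce to a least-squares system.
A = (ks**2)[None, :] * np.sin(np.outer(xs, ks))
coef, *_ = np.linalg.lstsq(A, f(xs), rcond=None)

u_fit = np.sin(np.outer(xs, ks)) @ coef
err = np.max(np.abs(u_fit - np.sin(xs)))
```

A deep feedforward network plays the role of the sine features in the full method, which is what makes the approach extend to complex geometries where no simple global basis is available.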