no code implementations • 30 Oct 2021 • Jonas Baggenstos, Diyora Salimova
In this article we show that ResNets are able to approximate solutions of Kolmogorov partial differential equations (PDEs) with constant diffusion and possibly nonlinear drift coefficients without suffering the curse of dimensionality, in the sense that the number of parameters of the approximating ResNets grows at most polynomially in both the reciprocal of the approximation accuracy $\varepsilon > 0$ and the dimension $d\in\mathbb{N}$ of the considered PDE.
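The architecture class behind this result can be illustrated with a minimal sketch. The following is not the paper's exact construction; it is an assumed, generic ResNet with identity skip connections and ReLU activations, where each block updates the state as $x \mapsto x + V\,\sigma(Wx + b)$ before an affine readout.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def resnet_forward(x, blocks, out_weight, out_bias):
    """Minimal ResNet sketch: each block applies a two-layer ReLU
    update x <- x + V @ relu(W @ x + b) (identity skip connection),
    followed by a single affine readout layer."""
    for W, b, V in blocks:
        x = x + V @ relu(W @ x + b)
    return out_weight @ x + out_bias

# Hypothetical usage: three residual blocks on a d-dimensional input.
rng = np.random.default_rng(0)
d = 4
blocks = [(0.1 * rng.standard_normal((d, d)),   # W
           np.zeros(d),                          # b
           0.1 * rng.standard_normal((d, d)))    # V
          for _ in range(3)]
y = resnet_forward(np.ones(d), blocks, np.ones((1, d)), np.zeros(1))
```

Note that the parameter count of such a network is the sum of the block and readout sizes, which is the quantity the polynomial bound in $\varepsilon^{-1}$ and $d$ refers to.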
no code implementations • 3 Jul 2020 • Aritz Bercher, Lukas Gonon, Arnulf Jentzen, Diyora Salimova
In applications one is often not only interested in the size of the error with respect to the objective function but also in the size of the error with respect to a test function which is possibly different from the objective function.
no code implementations • 3 Jun 2020 • Fabian Hornung, Arnulf Jentzen, Diyora Salimova
Each of these results establishes that DNNs overcome the curse of dimensionality in approximating suitable PDE solutions at a fixed time point $T>0$ and on a compact cube $[a, b]^d$ in space, but none of these results answers the question of whether the entire PDE solution on $[0, T]\times [a, b]^d$ can be approximated by DNNs without the curse of dimensionality.
1 code implementation • 28 Aug 2019 • Philipp Grohs, Arnulf Jentzen, Diyora Salimova
One key argument in most of these results is, first, to use a Monte Carlo approximation scheme that can approximate the solution of the PDE under consideration at a fixed space-time point without the curse of dimensionality and, thereafter, to prove that DNNs are flexible enough to mimic the behaviour of the used approximation scheme.
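The first step of this argument can be sketched concretely for the simplest Kolmogorov PDE, the $d$-dimensional heat equation $\partial_t u = \Delta u$ with $u(0,\cdot) = \varphi$: by the Feynman-Kac formula, $u(t,x) = \mathbb{E}[\varphi(x + \sqrt{2t}\,Z)]$ with $Z \sim \mathcal{N}(0, I_d)$, which a Monte Carlo average estimates at a fixed $(t,x)$ with cost linear in $d$ per sample. This is an assumed illustrative example, not the schemes used in the paper.

```python
import numpy as np

def heat_solution_mc(phi, t, x, n_samples=100_000, rng=None):
    """Monte Carlo estimate of u(t, x) = E[phi(x + sqrt(2 t) Z)],
    Z ~ N(0, I_d), i.e. of the heat equation du/dt = Laplace(u)
    with initial condition u(0, .) = phi."""
    rng = np.random.default_rng(rng)
    z = rng.standard_normal((n_samples, len(x)))
    return phi(x + np.sqrt(2.0 * t) * z).mean()

# Example with a known closed form: for phi(y) = ||y||^2 one has
# u(t, x) = ||x||^2 + 2 d t exactly.
phi = lambda y: (y ** 2).sum(axis=-1)
d, t = 10, 0.5
x = np.zeros(d)
estimate = heat_solution_mc(phi, t, x, rng=0)
exact = (x ** 2).sum() + 2 * d * t
```

The per-sample cost grows only linearly in $d$, which is why such schemes avoid the curse of dimensionality at a single space-time point; the second step of the argument is then to show a DNN can emulate this averaging.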
no code implementations • 19 Sep 2018 • Arnulf Jentzen, Diyora Salimova, Timo Welti
These numerical simulations indicate that DNNs seem to possess the fundamental flexibility to overcome the curse of dimensionality in the sense that the number of real parameters used to describe the DNN grows at most polynomially in both the reciprocal of the prescribed approximation accuracy $ \varepsilon > 0 $ and the dimension $ d \in \mathbb{N}$ of the function which the DNN aims to approximate in such computational problems.
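The quantity in this growth condition, the number of real parameters describing the DNN, is simply the total count of weights and biases. As a hedged illustration (the architecture below is hypothetical, not one from the simulations), a fully connected network whose width scales linearly with $d$ has a parameter count that grows quadratically, hence polynomially, in $d$:

```python
def dnn_param_count(layer_widths):
    """Number of real parameters (weight-matrix entries plus biases)
    of a fully connected network with the given layer widths,
    listed from input layer to output layer."""
    return sum(m * n + n for m, n in zip(layer_widths, layer_widths[1:]))

# Hypothetical architecture with hidden widths 2d: the parameter
# count grows like d^2, i.e. polynomially in the dimension d.
for d in (10, 100, 1000):
    widths = [d, 2 * d, 2 * d, 1]
    print(d, dnn_param_count(widths))
```

A curse-of-dimensionality-free approximation result asserts a bound of the form $\varepsilon^{-c} d^{c}$ on this count for some constant $c > 0$, rather than the exponential growth in $d$ that generic approximation bounds would give.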