
no code implementations • NeurIPS Workshop DLDE 2021 • Christa Cuchiero, Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega, Josef Teichmann

We consider the question of whether the time evolution of controlled differential equations on general state spaces can be approximated arbitrarily well by (regularized) regressions on features that are themselves generated by randomly chosen dynamical systems of moderately high dimension.

no code implementations • 29 Jul 2021 • Francesca Biagini, Lukas Gonon, Thomas Reitsam

First, we prove that the $\alpha$-quantile hedging price converges to the superhedging price at time $0$ as $\alpha$ tends to $1$, and then show that the $\alpha$-quantile hedging price can be approximated by a neural-network-based price.

no code implementations • 14 Jun 2021 • Lukas Gonon

We derive bounds for the prediction error of random neural networks for learning sufficiently non-degenerate Black-Scholes type models.

no code implementations • 23 Feb 2021 • Lukas Gonon, Christoph Schwab

Deep neural networks (DNNs) with ReLU activation function are proved to be able to express viscosity solutions of linear partial integro-differential equations (PIDEs) on state spaces of possibly high dimension $d$.

Numerical Analysis • Probability

no code implementations • 22 Oct 2020 • Lukas Gonon, Juan-Pablo Ortega

Echo state networks (ESNs) have recently been proved to be universal approximants for input/output systems with respect to various $L^p$-type criteria.
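The ESN architecture behind this result can be illustrated with a minimal sketch: internal weights are drawn at random and left fixed, the reservoir state is updated with a tanh nonlinearity, and only a linear readout is trained. All sizes and the toy target below are hypothetical choices for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 1 input channel, 100 reservoir units, 500 time steps
n_in, n_res, T = 1, 100, 500

# Randomly generated, fixed internal weights; spectral radius scaled
# below 1 so the reservoir forgets initial conditions (echo state property)
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

# Toy input/output system to learn
u = rng.uniform(-1, 1, size=(T, n_in))
y = np.sin(np.pi * u[:, 0])

# Run the reservoir and collect states
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])
    states[t] = x

# Only the linear readout is trained, here by ridge regression
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                        states.T @ y)
mse = np.mean((states @ W_out - y) ** 2)
```

The universality statement concerns which input/output systems such randomly built reservoirs can approximate; the sketch only shows the mechanics of running and training one.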

no code implementations • 17 Sep 2020 • Christa Cuchiero, Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega, Josef Teichmann

A new explanation of the geometric nature of the reservoir computing phenomenon is presented.

no code implementations • 3 Jul 2020 • Aritz Bercher, Lukas Gonon, Arnulf Jentzen, Diyora Salimova

In applications one is often not only interested in the size of the error with respect to the objective function but also in the size of the error with respect to a test function which is possibly different from the objective function.

no code implementations • 22 Apr 2020 • Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega

The notion of memory capacity, originally introduced for echo state and linear networks with independent inputs, is generalized to nonlinear recurrent networks with stationary but dependent inputs.
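For the classical setting the paper generalizes from, memory capacity can be estimated empirically: for each lag $k$, a linear readout is trained to reconstruct the input $k$ steps back, and the resulting $R^2$ values are summed. The sketch below uses i.i.d. inputs and a small random reservoir with hypothetical sizes; the paper's contribution is extending this notion to nonlinear networks with stationary but dependent inputs, which this toy example does not cover.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical sizes: 20 reservoir units, 5000 time steps, lags up to 30
n_res, T, max_lag = 20, 5000, 30
W_in = rng.uniform(-0.5, 0.5, size=n_res)
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

# i.i.d. inputs (the classical independent-input setting)
u = rng.uniform(-1, 1, size=T)
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

def lag_capacity(k):
    """R^2 of the best linear reconstruction of u_{t-k} from the state."""
    X, y = states[k:], u[: T - k]
    w = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ w
    return 1.0 - resid.var() / y.var()

capacities = [lag_capacity(k) for k in range(1, max_lag + 1)]
memory_capacity = sum(capacities)
```

Recent inputs are reconstructed almost perfectly while distant ones are forgotten, so the per-lag capacities decay with $k$ and their sum quantifies how much of the input past the network retains.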

no code implementations • 14 Feb 2020 • Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega

This work studies approximation based on single-hidden-layer feedforward and recurrent neural networks with randomly generated internal weights.
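The feedforward variant studied here is the random-features construction: hidden weights and biases are sampled once and frozen, so learning reduces to a linear least-squares problem in the output layer. The target function, sampling distributions, and sizes below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression target on [-1, 1] (hypothetical choice)
n_samples, n_features = 200, 300
x = rng.uniform(-1, 1, size=(n_samples, 1))
y = np.cos(2 * x[:, 0])

# Randomly generated internal weights and biases; never trained
A = rng.normal(0, 2, size=(1, n_features))
b = rng.uniform(-np.pi, np.pi, size=n_features)
H = np.tanh(x @ A + b)  # hidden-layer features

# Only the linear readout is fitted, via ridge regression
ridge = 1e-8
w = np.linalg.solve(H.T @ H + ridge * np.eye(n_features), H.T @ y)
mse = np.mean((H @ w - y) ** 2)
```

Because only `w` is trained, the statistical analysis can treat the random hidden layer as a fixed feature map, which is what makes approximation and prediction-error bounds for such networks tractable.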

no code implementations • 20 Nov 2019 • Lukas Gonon, Philipp Grohs, Arnulf Jentzen, David Kofler, David Šiška

These mathematical results from the scientific literature prove in part that algorithms based on ANNs are capable of overcoming the curse of dimensionality in the numerical approximation of high-dimensional PDEs.

no code implementations • 30 Oct 2019 • Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega

We analyze the practices of reservoir computing in the framework of statistical learning theory.

no code implementations • 7 Jul 2018 • Lukas Gonon, Juan-Pablo Ortega

The universal approximation properties with respect to $L^p$-type criteria of three important families of reservoir computers with stochastic discrete-time semi-infinite inputs are shown.

2 code implementations • 8 Feb 2018 • Hans Bühler, Lukas Gonon, Josef Teichmann, Ben Wood

We present a framework for hedging a portfolio of derivatives in the presence of market frictions such as transaction costs, market impact, liquidity constraints or risk limits using modern deep reinforcement machine learning methods.
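The core objective of such a framework can be sketched without any learning machinery: simulate market paths, apply a trading strategy, subtract proportional transaction costs, and measure the variability of the terminal hedging error. Everything below (Black-Scholes dynamics, parameters, the smooth moneyness strategy standing in for a deep network) is a hypothetical illustration of the objective, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy Black-Scholes market (hypothetical parameters)
n_paths, n_steps = 5000, 30
S0, K, sigma, dt = 100.0, 100.0, 0.2, 1.0 / 30
Z = rng.standard_normal((n_paths, n_steps))
log_inc = -0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * Z
S = S0 * np.exp(np.concatenate(
    [np.zeros((n_paths, 1)), np.cumsum(log_inc, axis=1)], axis=1))
payoff = np.maximum(S[:, -1] - K, 0.0)   # European call
cost_rate = 1e-3                          # proportional transaction cost

def hedging_loss(delta):
    """Variance of terminal P&L when holding delta[:, t] over step t,
    including proportional costs for every position change."""
    gains = np.sum(delta * np.diff(S, axis=1), axis=1)
    padded = np.concatenate(
        [np.zeros((n_paths, 1)), delta, np.zeros((n_paths, 1))], axis=1)
    costs = cost_rate * np.sum(np.abs(np.diff(padded, axis=1)) * S, axis=1)
    pnl = gains - costs - payoff
    return np.mean((pnl - pnl.mean()) ** 2)  # the mean is absorbed by the price

# Smooth function of moneyness as a stand-in for the neural strategy;
# in the paper this map would be a deep network trained on this loss.
delta_net = 1.0 / (1.0 + np.exp(-(S[:, :-1] - K) / 5.0))

loss_unhedged = hedging_loss(np.zeros((n_paths, n_steps)))
loss_hedged = hedging_loss(delta_net)
```

A reinforcement-learning or deep-learning approach would parametrize the strategy by a network and minimize a loss of this kind (or a convex risk measure of the P&L) by stochastic gradient descent; frictions enter simply as extra terms in the simulated P&L.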

Computational Finance • Numerical Analysis • Optimization and Control • Probability • Risk Management • 91G60, 65K99
