no code implementations • 23 Dec 2014 • Christa Cuchiero, Irene Klein, Josef Teichmann
In the context of large financial markets we formulate the notion of \emph{no asymptotic free lunch with vanishing risk} (NAFLVR), under which we can prove a version of the fundamental theorem of asset pricing (FTAP) in markets with an (even uncountably) infinite number of assets, as is, for instance, the case in bond markets.
3 code implementations • 8 Feb 2018 • Hans Bühler, Lukas Gonon, Josef Teichmann, Ben Wood
We present a framework for hedging a portfolio of derivatives in the presence of market frictions such as transaction costs, market impact, liquidity constraints or risk limits using modern deep reinforcement learning methods.
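The framework is model-free in spirit; as a rough illustration of the idea (not the authors' implementation, and with arbitrary market parameters), one could train a small network to choose hedge ratios that minimize a quadratic hedging loss over simulated paths, including a proportional transaction-cost term:

```python
# Minimal deep-hedging sketch (illustrative only, not the paper's implementation).
# Assumptions: simulated geometric Brownian motion, a short at-the-money call to
# hedge, proportional transaction costs, and a simple quadratic hedging loss.
import torch

torch.manual_seed(0)
n_paths, n_steps = 2_000, 30
dt, sigma, s0, strike, cost = 1.0 / 30, 0.2, 1.0, 1.0, 0.001

z = torch.randn(n_paths, n_steps)
log_ret = (-0.5 * sigma**2) * dt + sigma * dt**0.5 * z
paths = s0 * torch.cat([torch.ones(n_paths, 1), log_ret.cumsum(dim=1).exp()], dim=1)

# hedge network: (time, spot, current position) -> new position
net = torch.nn.Sequential(
    torch.nn.Linear(3, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for epoch in range(100):
    pos = torch.zeros(n_paths, 1)
    pnl = torch.zeros(n_paths, 1)
    for t in range(n_steps):
        spot = paths[:, t:t + 1]
        state = torch.cat([torch.full((n_paths, 1), t * dt), spot, pos], dim=1)
        new_pos = net(state)
        pnl = pnl - cost * (new_pos - pos).abs() * spot       # transaction costs
        pnl = pnl + new_pos * (paths[:, t + 1:t + 2] - spot)  # gains from price move
        pos = new_pos
    payoff = (paths[:, -1:] - strike).clamp(min=0.0)          # short-call liability
    loss = ((pnl - payoff) ** 2).mean()                       # quadratic hedging loss
    opt.zero_grad(); loss.backward(); opt.step()
```

The paper itself works with general convex risk measures (e.g. expected shortfall or entropic risk) rather than the quadratic loss used in this sketch.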
Computational Finance • Numerical Analysis • Optimization and Control • Probability • Risk Management • 91G60, 65K99
1 code implementation • 7 Nov 2019 • Jakob Heiss, Josef Teichmann, Hanna Wutte
In this paper, we consider one-dimensional (shallow) ReLU neural networks in which the weights are chosen randomly and only the terminal layer is trained.
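Since only the linear readout is trained, fitting such a network reduces to a (ridge-regularized) least-squares problem; a minimal NumPy sketch on toy data (all constants here are arbitrary) could look as follows:

```python
# Shallow ReLU network with random (frozen) hidden weights; only the terminal
# linear layer is trained, here by ridge-regularized least squares.
import numpy as np

rng = np.random.default_rng(0)
n, width, ridge = 200, 500, 1e-3

x = rng.uniform(-1, 1, size=(n, 1))                     # 1-d inputs
y = np.sin(3 * x) + 0.1 * rng.standard_normal((n, 1))   # toy regression target

w = rng.standard_normal((1, width))                     # random input weights (not trained)
b = rng.uniform(-1, 1, size=width)                      # random biases (not trained)
features = np.maximum(x @ w + b, 0.0)                   # ReLU features, shape (n, width)

# terminal layer: solve (F^T F + ridge * I) beta = F^T y
beta = np.linalg.solve(features.T @ features + ridge * np.eye(width),
                       features.T @ y)

x_test = np.linspace(-1, 1, 5).reshape(-1, 1)
pred = np.maximum(x_test @ w + b, 0.0) @ beta
print(pred.ravel())
```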
no code implementations • 27 Apr 2020 • Calypso Herrera, Florian Krach, Josef Teichmann
The Lipschitz constant is an important quantity that arises in analysing the convergence of gradient-based optimization methods.
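As a point of reference, the standard (usually very loose) upper bound for a feed-forward ReLU network multiplies the spectral norms of its weight matrices; the following NumPy snippet computes that textbook bound and is not the estimator studied in the paper:

```python
# Crude upper bound on the Lipschitz constant of a ReLU network:
# the product of the spectral norms of its weight matrices.
# (Standard textbook bound, typically loose; not the paper's estimator.)
import numpy as np

rng = np.random.default_rng(0)
weights = [rng.standard_normal((64, 10)),   # layer 1: 10 -> 64
           rng.standard_normal((64, 64)),   # layer 2: 64 -> 64
           rng.standard_normal((1, 64))]    # output:  64 -> 1

lip_bound = 1.0
for W in weights:
    lip_bound *= np.linalg.norm(W, 2)       # largest singular value
print("Lipschitz upper bound:", lip_bound)
```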
1 code implementation • 28 Apr 2020 • Calypso Herrera, Florian Krach, Anastasis Kratsios, Pierre Ruyssen, Josef Teichmann
The robust PCA of covariance matrices plays an essential role when isolating key explanatory features.
1 code implementation • 5 May 2020 • Christa Cuchiero, Wahid Khosrawi, Josef Teichmann
We propose a fully data-driven approach to calibrate local stochastic volatility (LSV) models, circumventing in particular the ad hoc interpolation of the volatility surface.
2 code implementations • ICLR 2021 • Calypso Herrera, Florian Krach, Josef Teichmann
We introduce the Neural Jump ODE (NJ-ODE) that provides a data-driven approach to learn, continuously in time, the conditional expectation of a stochastic process.
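Conceptually, a latent state follows a neural ODE between observation times and is updated by a jump network whenever a new observation arrives, while a readout produces the conditional-expectation estimate; the following heavily simplified PyTorch sketch (made-up class and layer names, no training loop) only illustrates this mechanism:

```python
# Heavily simplified sketch of the NJ-ODE mechanism (illustrative, not the
# reference implementation): a latent state follows an Euler-discretized
# neural ODE between observations and jumps when an observation arrives.
import torch

class TinyNJODE(torch.nn.Module):
    def __init__(self, obs_dim=1, hidden=16):
        super().__init__()
        self.drift = torch.nn.Sequential(     # latent dynamics between observations
            torch.nn.Linear(hidden + 1, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, hidden))
        self.jump = torch.nn.Sequential(      # update at an observation
            torch.nn.Linear(hidden + obs_dim, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, hidden))
        self.readout = torch.nn.Linear(hidden, obs_dim)
        self.hidden = hidden

    def forward(self, obs_times, obs_values, grid, dt):
        h = torch.zeros(1, self.hidden)
        preds, k = [], 0
        for t in grid:
            # jump update if an observation occurs at this grid point
            if k < len(obs_times) and abs(obs_times[k] - t) < 1e-9:
                h = self.jump(torch.cat([h, obs_values[k].view(1, -1)], dim=1))
                k += 1
            # Euler step of the latent ODE between observations
            h = h + dt * self.drift(torch.cat([h, torch.tensor([[t]])], dim=1))
            preds.append(self.readout(h))
        return torch.cat(preds)               # estimate of E[X_t | observations so far]

model = TinyNJODE()
grid = [round(i * 0.1, 1) for i in range(1, 11)]
estimates = model(obs_times=[0.2, 0.5], obs_values=torch.tensor([[1.0], [0.8]]),
                  grid=grid, dt=0.1)
```

In the actual framework the readout is trained against the observed values with a suitable mean-squared objective; the sketch omits the training step.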
no code implementations • 16 Jun 2020 • Matteo Gambara, Josef Teichmann
Consistent Recalibration models (CRC) have been introduced to capture, in the necessary generality, the dynamic features of term structures of derivatives' prices.
no code implementations • 24 Jun 2020 • Paul Friedrich, Josef Teichmann
The Kyle model describes how an equilibrium of order sizes and security prices naturally arises between a trader with insider information and the price-providing market maker as they interact through a series of auctions.
Computational Finance • Trading and Market Microstructure
no code implementations • 10 Sep 2020 • Thomas Krabichler, Josef Teichmann
To the best of our knowledge, the application of deep learning in the field of quantitative risk management is still a relatively recent phenomenon.
no code implementations • 17 Sep 2020 • Christa Cuchiero, Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega, Josef Teichmann
A new explanation of the geometric nature of the reservoir computing phenomenon is presented.
no code implementations • 1 Jan 2021 • Jakob Heiss, Alexis Stockinger, Josef Teichmann
We introduce a new Reduction Algorithm that makes use of the properties of ReLU neurons to significantly reduce the number of neurons in a trained Deep Neural Network.
no code implementations • 3 Feb 2021 • Nicolas Curin, Michael Kettler, Xi Kleisinger-Yu, Vlatka Komaric, Thomas Krabichler, Josef Teichmann, Hanna Wutte
To the best of our knowledge, the application of deep learning in the field of quantitative risk management is still a relatively recent phenomenon.
no code implementations • 3 Feb 2021 • Blanka Horvath, Josef Teichmann, Zan Zuric
We investigate the performance of the Deep Hedging framework under training paths beyond the (finite-dimensional) Markovian setup.
1 code implementation • 26 Feb 2021 • Jakob Heiss, Jakob Weissteiner, Hanna Wutte, Sven Seuken, Josef Teichmann
To isolate the effect of model uncertainty, we focus on a noiseless setting with scarce training data.
2 code implementations • 28 Apr 2021 • Calypso Herrera, Florian Krach, Pierre Ruyssen, Josef Teichmann
This paper presents the benefits of using randomized neural networks instead of standard basis functions or deep neural networks to approximate the solutions of optimal stopping problems.
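In the spirit of Longstaff-Schwartz regression, only a linear readout on top of frozen random ReLU features needs to be fit at each exercise date; a rough NumPy sketch for a Bermudan put under Black-Scholes dynamics (illustrative parameters, not the paper's code) might be:

```python
# Bermudan put priced by backward induction, with continuation values
# regressed on frozen random ReLU features (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, width = 20_000, 10, 200
s0, strike, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
dt = T / n_steps
disc = np.exp(-r * dt)

# simulate geometric Brownian motion paths
z = rng.standard_normal((n_paths, n_steps))
s = s0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
s = np.hstack([np.full((n_paths, 1), s0), s])

payoff = lambda x: np.maximum(strike - x, 0.0)

# one fixed set of random ReLU features, reused at every exercise date
w = rng.standard_normal((1, width)) / s0        # scaled so features are O(1) near s0
b = rng.uniform(-1, 1, size=width)
feats = lambda x: np.maximum(x[:, None] * w + b, 0.0)

value = payoff(s[:, -1])
for t in range(n_steps - 1, 0, -1):
    itm = payoff(s[:, t]) > 0                   # regress on in-the-money paths only
    F = feats(s[itm, t])
    beta = np.linalg.lstsq(F, disc * value[itm], rcond=None)[0]
    cont = F @ beta                             # estimated continuation value
    exercise = payoff(s[itm, t]) >= cont
    value[itm] = np.where(exercise, payoff(s[itm, t]), disc * value[itm])
    value[~itm] = disc * value[~itm]
print("Bermudan put estimate:", disc * value.mean())
```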
no code implementations • NeurIPS Workshop DLDE 2021 • Christa Cuchiero, Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega, Josef Teichmann
We consider the question of whether the time evolution of controlled differential equations on general state spaces can be arbitrarily well approximated by (regularized) regressions on features generated themselves through randomly chosen dynamical systems of moderately high dimension.
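The features in question are randomized signatures: the state of a moderately high-dimensional random dynamical system driven by the control path, on top of which a linear (ridge) regression is run. A schematic NumPy sketch of such a feature map, with arbitrary dimensions and a toy target, is given below:

```python
# Schematic randomized-signature / reservoir feature map (illustrative only):
# a fixed random dynamical system is driven by the increments of a control
# path, and the terminal reservoir state is used as a regression feature.
import numpy as np

rng = np.random.default_rng(0)
res_dim, path_dim = 50, 2

# fixed random vector fields, one per control coordinate (plus time)
A = rng.standard_normal((path_dim + 1, res_dim, res_dim)) / np.sqrt(res_dim)
b = rng.standard_normal((path_dim + 1, res_dim))

def randomized_signature(path, dt):
    """Evolve the reservoir state along one control path of shape (steps, path_dim)."""
    r = np.zeros(res_dim)
    for du in np.diff(path, axis=0):
        du = np.concatenate([[dt], du])         # include time as the 0th coordinate
        for i in range(path_dim + 1):
            r = r + np.tanh(A[i] @ r + b[i]) * du[i]
    return r

# toy regression: learn a path functional from the reservoir features
paths = rng.standard_normal((100, 20, path_dim)).cumsum(axis=1) * 0.1
targets = paths[:, -1, 0] ** 2                  # some nonlinear functional of the path
X = np.stack([randomized_signature(p, dt=1.0 / 20) for p in paths])
ridge = 1e-3
beta = np.linalg.solve(X.T @ X + ridge * np.eye(res_dim), X.T @ targets)
print("in-sample MSE:", np.mean((X @ beta - targets) ** 2))
```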
1 code implementation • 31 Dec 2021 • Jakob Heiss, Josef Teichmann, Hanna Wutte
In practice, multi-task learning (through learning features shared among tasks) is an essential property of deep neural networks (NNs).
no code implementations • 2 Jan 2022 • Enea Monzio Compagnoni, Anna Scampicchio, Luca Biggio, Antonio Orvieto, Thomas Hofmann, Josef Teichmann
Many finance, physics, and engineering phenomena are modeled by continuous-time dynamical systems driven by highly irregular (stochastic) inputs.
no code implementations • 7 Jan 2022 • Erdinc Akyildirim, Matteo Gambara, Josef Teichmann, Syang Zhou
In this case, we are able to identify pump-and-dump attempts organized on social networks with F1 scores of up to 88% by means of our unsupervised learning algorithm, achieving results close to the state of the art in the field, which relies on supervised learning.
1 code implementation • 28 Jun 2022 • Florian Krach, Marc Nübel, Josef Teichmann
This paper studies the problem of forecasting general stochastic processes using a path-dependent extension of the Neural Jump ODE (NJ-ODE) framework (Herrera et al., 2021).
no code implementations • 28 Nov 2022 • David Itkin, Benedikt Koch, Martin Larsson, Josef Teichmann
We consider an asymptotic robust growth problem under model uncertainty and in the presence of (non-Markovian) stochastic covariance.
no code implementations • 20 Mar 2023 • Jakob Heiss, Josef Teichmann, Hanna Wutte
Randomized neural networks (randomized NNs), in which only the terminal layer's weights are optimized, constitute a powerful model class for reducing the computational time needed to train a neural network.
1 code implementation • 5 Jun 2023 • Christa Cuchiero, Philipp Schmocker, Josef Teichmann
This then applies in particular to the approximation of (non-anticipative) path-space functionals via functional input neural networks.
1 code implementation • 24 Jul 2023 • William Andersson, Jakob Heiss, Florian Krach, Josef Teichmann
The Path-Dependent Neural Jump Ordinary Differential Equation (PD-NJ-ODE) is a model for predicting continuous-time stochastic processes with irregular and incomplete observations.
1 code implementation • 27 Jul 2023 • Josef Teichmann, Hanna Wutte
These approaches prove to be successful for pricing the passport option in one-dimensional and multi-dimensional uncorrelated Black-Scholes (BS) markets.
no code implementations • 27 Dec 2023 • Erdinc Akyildirim, Matteo Gambara, Josef Teichmann, Syang Zhou
We present convincing empirical results on the application of Randomized Signature Methods for non-linear, non-parametric drift estimation in a multivariate financial market.
1 code implementation • 22 Mar 2024 • Florian Krach, Josef Teichmann, Hanna Wutte
Lastly, we find that our generative approach for learning optimal (non-)robust investments under trading costs yields universally applicable alternatives to well-known asymptotic strategies from idealized settings.