Search Results for author: Josef Teichmann

Found 28 papers, 13 papers with code

Robust Utility Optimization via a GAN Approach

1 code implementation22 Mar 2024 Florian Krach, Josef Teichmann, Hanna Wutte

Lastly, we show that our generative approach for learning optimal (non-)robust investments under trading costs yields universally applicable alternatives to well-known asymptotic strategies from idealized settings.

Generative Adversarial Network

Randomized Signature Methods in Optimal Portfolio Selection

no code implementations27 Dec 2023 Erdinc Akyildirim, Matteo Gambara, Josef Teichmann, Syang Zhou

We present convincing empirical results on the application of Randomized Signature Methods to non-linear, non-parametric drift estimation in a multivariate financial market.

Portfolio Optimization

Machine Learning-powered Pricing of the Multidimensional Passport Option

1 code implementation27 Jul 2023 Josef Teichmann, Hanna Wutte

These approaches prove successful for pricing the passport option in one-dimensional and multi-dimensional uncorrelated Black-Scholes (BS) markets.

Board Games

Extending Path-Dependent NJ-ODEs to Noisy Observations and a Dependent Observation Framework

1 code implementation24 Jul 2023 William Andersson, Jakob Heiss, Florian Krach, Josef Teichmann

The Path-Dependent Neural Jump Ordinary Differential Equation (PD-NJ-ODE) is a model for predicting continuous-time stochastic processes with irregular and incomplete observations.

Time Series

Global universal approximation of functional input maps on weighted spaces

1 code implementation5 Jun 2023 Christa Cuchiero, Philipp Schmocker, Josef Teichmann

This then applies in particular to approximation of (non-anticipative) path space functionals via functional input neural networks.

Gaussian Processes · regression +1

How (Implicit) Regularization of ReLU Neural Networks Characterizes the Learned Function -- Part II: the Multi-D Case of Two Layers with Random First Layer

no code implementations20 Mar 2023 Jakob Heiss, Josef Teichmann, Hanna Wutte

Randomized neural networks (randomized NNs), in which only the terminal layer's weights are optimized, constitute a powerful model class for reducing the computational cost of training neural networks.

regression
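
For readers unfamiliar with this model class, here is a minimal sketch of the randomized-NN idea described in the snippet above: the hidden-layer weights are drawn once at random and frozen, and only the terminal linear layer is fitted, here by ridge regression. The architecture, toy data, and ridge penalty are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Minimal sketch of a randomized (random-feature) ReLU network: the hidden-layer
# weights are drawn once at random and frozen; only the terminal linear layer is
# fitted, here by ridge regression on a toy 2-D regression problem.
rng = np.random.default_rng(0)
d, n_hidden = 2, 512
W = rng.normal(size=(d, n_hidden))       # frozen random first-layer weights
b = rng.normal(size=n_hidden)            # frozen random biases

def features(x):
    """Random ReLU features for inputs x of shape (n_samples, d)."""
    return np.maximum(x @ W + b, 0.0)

X = rng.uniform(-1.0, 1.0, size=(200, d))
y = np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2

lam = 1e-3                               # ridge penalty (illustrative choice)
Phi = features(X)
theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_hidden), Phi.T @ y)
y_hat = features(X) @ theta              # fitted values of the randomized NN
```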

Ergodic robust maximization of asymptotic growth under stochastic volatility

no code implementations28 Nov 2022 David Itkin, Benedikt Koch, Martin Larsson, Josef Teichmann

We consider an asymptotic robust growth problem under model uncertainty and in the presence of (non-Markovian) stochastic covariance.

Optimal Estimation of Generic Dynamics by Path-Dependent Neural Jump ODEs

1 code implementation28 Jun 2022 Florian Krach, Marc Nübel, Josef Teichmann

This paper studies the problem of forecasting general stochastic processes using a path-dependent extension of the Neural Jump ODE (NJ-ODE) framework (Herrera et al., 2021).

Time Series · Time Series Analysis

Applications of Signature Methods to Market Anomaly Detection

no code implementations7 Jan 2022 Erdinc Akyildirim, Matteo Gambara, Josef Teichmann, Syang Zhou

In this case, we are able to identify pump-and-dump attempts organized on social networks, with F1 scores of up to 88%, by means of our unsupervised learning algorithm, achieving results close to the supervised state of the art in the field.

Anomaly Detection · Time Series +1

On the effectiveness of Randomized Signatures as Reservoir for Learning Rough Dynamics

no code implementations2 Jan 2022 Enea Monzio Compagnoni, Anna Scampicchio, Luca Biggio, Antonio Orvieto, Thomas Hofmann, Josef Teichmann

Many finance, physics, and engineering phenomena are modeled by continuous-time dynamical systems driven by highly irregular (stochastic) inputs.

LEMMA · Time Series +1

How Infinitely Wide Neural Networks Can Benefit from Multi-task Learning -- an Exact Macroscopic Characterization

1 code implementation31 Dec 2021 Jakob Heiss, Josef Teichmann, Hanna Wutte

In practice, multi-task learning (through learning features shared among tasks) is an essential property of deep neural networks (NNs).

Gaussian Processes · L2 Regularization +2

Expressive Power of Randomized Signature

no code implementations NeurIPS Workshop DLDE 2021 Christa Cuchiero, Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega, Josef Teichmann

We consider the question whether the time evolution of controlled differential equations on general state spaces can be arbitrarily well approximated by (regularized) regressions on features generated themselves through randomly chosen dynamical systems of moderately high dimension.

LEMMA · Transfer Learning
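
As a rough illustration of the construction referenced above, the following sketch builds "randomized signature" features by driving a fixed, randomly drawn dynamical system (a reservoir) with the increments of an input path; a regularized linear regression on the terminal reservoir state would then serve as the readout. The dimensions, scaling, and choice of activation are assumptions for illustration only, not the paper's specification.

```python
import numpy as np

# Rough illustration of a randomized-signature reservoir: a fixed, randomly
# drawn dynamical system is driven by the increments of an input path, and its
# terminal state serves as the feature vector for a (regularized) linear readout.
rng = np.random.default_rng(1)
d, dim = 2, 64                                      # path dimension, reservoir size
A = rng.normal(size=(d, dim, dim)) / np.sqrt(dim)   # random coupling matrices
b = rng.normal(size=(d, dim))                       # random shifts

def randomized_signature(path, activation=np.tanh):
    """path: array of shape (T, d), a discretized input/control path."""
    Z = np.zeros(dim)
    for t in range(1, len(path)):
        dX = path[t] - path[t - 1]
        # Euler step of dZ = sum_i activation(A_i Z + b_i) dX^i
        Z = Z + sum(activation(A[i] @ Z + b[i]) * dX[i] for i in range(d))
    return Z

# toy usage: features of a 2-D random-walk path; a ridge regression on such
# features (collected over many paths) would form the readout
features = randomized_signature(np.cumsum(0.1 * rng.normal(size=(100, d)), axis=0))
```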

Optimal Stopping via Randomized Neural Networks

2 code implementations28 Apr 2021 Calypso Herrera, Florian Krach, Pierre Ruyssen, Josef Teichmann

This paper presents the benefits of using randomized neural networks instead of standard basis functions or deep neural networks to approximate the solutions of optimal stopping problems.

BIG-bench Machine Learning
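
To make the idea concrete, here is a hedged sketch in the spirit of least-squares Monte Carlo, where the continuation value at each exercise date is regressed on random ReLU features standing in for the randomized neural networks of the paper; the Bermudan put, the Black-Scholes dynamics, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Least-squares Monte Carlo sketch for a Bermudan put under Black-Scholes, with
# the continuation value at each exercise date regressed on random ReLU features
# (standing in for the randomized neural networks of the paper).
rng = np.random.default_rng(2)
n_paths, n_steps = 20_000, 50
S0, K, r, sigma, T = 100.0, 100.0, 0.02, 0.2, 1.0
dt = T / n_steps
disc = np.exp(-r * dt)

Z = rng.normal(size=(n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1))

def random_relu_features(x, n_hidden=100):
    """Random ReLU features of the (scaled) spot price; redrawn at each date."""
    W = rng.normal(size=(1, n_hidden))
    b = rng.normal(size=n_hidden)
    return np.maximum(x[:, None] * W + b, 0.0)

payoff = lambda s: np.maximum(K - s, 0.0)
V = payoff(S[:, -1])                                  # value at maturity
for t in range(n_steps - 2, -1, -1):                  # backward induction
    V = disc * V
    itm = payoff(S[:, t]) > 0                         # regress on in-the-money paths only
    Phi = random_relu_features(S[itm, t] / K)
    beta = np.linalg.lstsq(Phi, V[itm], rcond=None)[0]
    exercise = payoff(S[itm, t]) > Phi @ beta         # exercise if payoff beats continuation
    V[itm] = np.where(exercise, payoff(S[itm, t]), V[itm])

price = disc * V.mean()
print(f"Bermudan put price estimate: {price:.3f}")
```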

NOMU: Neural Optimization-based Model Uncertainty

1 code implementation26 Feb 2021 Jakob Heiss, Jakob Weissteiner, Hanna Wutte, Sven Seuken, Josef Teichmann

To isolate the effect of model uncertainty, we focus on a noiseless setting with scarce training data.

Bayesian Optimization · regression

A deep learning model for gas storage optimization

no code implementations3 Feb 2021 Nicolas Curin, Michael Kettler, Xi Kleisinger-Yu, Vlatka Komaric, Thomas Krabichler, Josef Teichmann, Hanna Wutte

To the best of our knowledge, the application of deep learning in the field of quantitative risk management is still a relatively recent phenomenon.

Management · reinforcement-learning +1

Deep Hedging under Rough Volatility

no code implementations3 Feb 2021 Blanka Horvath, Josef Teichmann, Zan Zuric

We investigate the performance of the Deep Hedging framework under training paths beyond the (finite dimensional) Markovian setup.

Time Series · Time Series Analysis

Reducing the number of neurons of Deep ReLU Networks based on the current theory of Regularization

no code implementations1 Jan 2021 Jakob Heiss, Alexis Stockinger, Josef Teichmann

We introduce a new Reduction Algorithm that exploits the properties of ReLU neurons to significantly reduce the number of neurons in a trained Deep Neural Network.

Discrete-time signatures and randomness in reservoir computing

no code implementations17 Sep 2020 Christa Cuchiero, Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega, Josef Teichmann

A new explanation of the geometric nature of the reservoir computing phenomenon is presented.

Deep Replication of a Runoff Portfolio

no code implementations10 Sep 2020 Thomas Krabichler, Josef Teichmann

To the best of our knowledge, the application of deep learning in the field of quantitative risk management is still a relatively recent phenomenon.

Decision Making · Management

Deep Investing in Kyle's Single Period Model

no code implementations24 Jun 2020 Paul Friedrich, Josef Teichmann

The Kyle model describes how an equilibrium of order sizes and security prices naturally arises between a trader with insider information and the price-providing market maker as they interact through a series of auctions.

Computational Finance · Trading and Market Microstructure
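
For context, the classical single-period Kyle equilibrium referred to above has a well-known closed form, recalled in the sketch below; this is textbook background rather than the paper's deep-learning formulation, and the numerical parameters are made up for illustration.

```python
import numpy as np

# Classical single-period Kyle equilibrium, recalled for context: the insider
# submits x = beta * (v - p0), the market maker quotes p = p0 + lam * (x + u).
rng = np.random.default_rng(3)
p0, Sigma0, sigma_u = 1.0, 0.04, 1.0               # prior mean/variance of v, noise-trade std
lam = np.sqrt(Sigma0) / (2 * sigma_u)              # market maker's price impact
beta = sigma_u / np.sqrt(Sigma0)                   # insider's trading intensity

v = rng.normal(p0, np.sqrt(Sigma0), size=100_000)  # true asset value
u = rng.normal(0.0, sigma_u, size=100_000)         # noise-trader order flow
x = beta * (v - p0)                                # insider order
p = p0 + lam * (x + u)                             # execution price set by market maker

print(f"average insider profit ~ {((v - p) * x).mean():.4f} "
      f"(theory: {0.5 * sigma_u * np.sqrt(Sigma0):.4f})")
```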

Consistent Recalibration Models and Deep Calibration

no code implementations16 Jun 2020 Matteo Gambara, Josef Teichmann

Consistent Recalibration models (CRC) have been introduced to capture in necessary generality the dynamic features of term structures of derivatives' prices.

BIG-bench Machine Learning

Neural Jump Ordinary Differential Equations: Consistent Continuous-Time Prediction and Filtering

2 code implementations ICLR 2021 Calypso Herrera, Florian Krach, Josef Teichmann

We introduce the Neural Jump ODE (NJ-ODE) that provides a data-driven approach to learn, continuously in time, the conditional expectation of a stochastic process.

Time Series · Time Series Analysis
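
The following is a minimal, heavily simplified sketch of the NJ-ODE idea as described above: a latent state evolves via a learned ODE between observation times and is updated by a learned jump map whenever an observation arrives, with a readout approximating the conditional expectation. The network sizes, the Euler discretization, and the interface are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class NeuralJumpODE(nn.Module):
    """Simplified NJ-ODE sketch: a latent state evolves through a learned ODE
    between observation times and is updated by a learned jump map whenever a
    new observation arrives; a linear readout gives the prediction (an estimate
    of the conditional expectation)."""

    def __init__(self, obs_dim, hidden_dim=32):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.drift = nn.Sequential(nn.Linear(hidden_dim + 1, hidden_dim), nn.Tanh(),
                                   nn.Linear(hidden_dim, hidden_dim))
        self.jump = nn.Sequential(nn.Linear(hidden_dim + obs_dim, hidden_dim), nn.Tanh(),
                                  nn.Linear(hidden_dim, hidden_dim))
        self.readout = nn.Linear(hidden_dim, obs_dim)

    def forward(self, obs_times, obs_values, t_grid):
        """obs_times: sorted 1-D tensor; obs_values: (n_obs, obs_dim) tensor;
        t_grid: 1-D tensor of evaluation times. Returns predictions on t_grid."""
        h = torch.zeros(self.hidden_dim)
        preds, next_obs, t_prev = [], 0, t_grid[0]
        for t in t_grid:
            h = h + (t - t_prev) * self.drift(torch.cat([h, t.view(1)]))  # Euler ODE step
            while next_obs < len(obs_times) and obs_times[next_obs] <= t:
                h = self.jump(torch.cat([h, obs_values[next_obs]]))       # jump at observation
                next_obs += 1
            preds.append(self.readout(h))
            t_prev = t
        return torch.stack(preds)
```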

A generative adversarial network approach to calibration of local stochastic volatility models

1 code implementation5 May 2020 Christa Cuchiero, Wahid Khosrawi, Josef Teichmann

We propose a fully data-driven approach to calibrate local stochastic volatility (LSV) models, circumventing in particular the ad hoc interpolation of the volatility surface.

Generative Adversarial Network

Denise: Deep Robust Principal Component Analysis for Positive Semidefinite Matrices

1 code implementation28 Apr 2020 Calypso Herrera, Florian Krach, Anastasis Kratsios, Pierre Ruyssen, Josef Teichmann

The robust PCA of covariance matrices plays an essential role when isolating key explanatory features.

Local Lipschitz Bounds of Deep Neural Networks

no code implementations27 Apr 2020 Calypso Herrera, Florian Krach, Josef Teichmann

The Lipschitz constant is an important quantity that arises in analysing the convergence of gradient-based optimization methods.
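
As background for why this quantity matters, the snippet below computes the standard coarse global Lipschitz upper bound for a feedforward ReLU network, namely the product of the layers' spectral norms; the paper itself derives sharper local bounds, which this sketch does not reproduce.

```python
import torch
import torch.nn as nn

# Baseline illustration: the product of the spectral norms of the weight matrices
# is a standard (coarse) global Lipschitz upper bound for a ReLU feedforward net.
net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))

bound = 1.0
for layer in net:
    if isinstance(layer, nn.Linear):
        bound *= torch.linalg.matrix_norm(layer.weight, ord=2).item()  # spectral norm
print(f"coarse global Lipschitz upper bound: {bound:.2f}")
```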

How Implicit Regularization of ReLU Neural Networks Characterizes the Learned Function -- Part I: the 1-D Case of Two Layers with Random First Layer

1 code implementation7 Nov 2019 Jakob Heiss, Josef Teichmann, Hanna Wutte

In this paper, we consider one dimensional (shallow) ReLU neural networks in which weights are chosen randomly and only the terminal layer is trained.

regression

Deep Hedging

3 code implementations8 Feb 2018 Hans Bühler, Lukas Gonon, Josef Teichmann, Ben Wood

We present a framework for hedging a portfolio of derivatives in the presence of market frictions such as transaction costs, market impact, liquidity constraints or risk limits using modern deep reinforcement machine learning methods.

Computational Finance · Numerical Analysis · Optimization and Control · Probability · Risk Management · 91G60, 65K99
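
To illustrate the general shape of such an approach, here is a hedged sketch in which a small network maps (time, price, current position) to a new position and is trained to minimize a simple shortfall-type loss of the terminal hedged P&L including proportional transaction costs; the market model, payoff, cost level, and loss function are illustrative stand-ins, not the framework's actual choices.

```python
import torch
import torch.nn as nn

# Sketch: a small network maps (time, price, current position) to a new position;
# training minimizes a shortfall-type loss of the terminal hedged P&L, including
# proportional transaction costs.
torch.manual_seed(0)
n_paths, n_steps = 5_000, 30
dt, S0, sigma, cost = 1.0 / 30, 1.0, 0.2, 1e-3
payoff = lambda s: torch.relu(s - 1.0)            # we are short one call struck at 1.0

policy = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for it in range(100):
    Z = torch.randn(n_paths, n_steps)
    S = S0 * torch.exp(torch.cumsum(-0.5 * sigma**2 * dt + sigma * dt**0.5 * Z, dim=1))
    pnl = torch.zeros(n_paths)
    delta = torch.zeros(n_paths, 1)                # current hedge position
    S_prev = torch.full((n_paths, 1), S0)
    for t in range(n_steps):
        state = torch.cat([torch.full((n_paths, 1), t * dt), S_prev, delta], dim=1)
        new_delta = policy(state)
        pnl = pnl - cost * ((new_delta - delta).abs() * S_prev).squeeze(1)  # trading costs
        S_t = S[:, t:t + 1]
        pnl = pnl + (new_delta * (S_t - S_prev)).squeeze(1)                 # hedging gains
        delta, S_prev = new_delta, S_t
    loss = torch.mean(torch.relu(payoff(S[:, -1]) - pnl))  # penalize hedging shortfall
    opt.zero_grad()
    loss.backward()
    opt.step()
```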

A new perspective on the fundamental theorem of asset pricing for large financial markets

no code implementations23 Dec 2014 Christa Cuchiero, Irene Klein, Josef Teichmann

In the context of large financial markets we formulate the notion of no asymptotic free lunch with vanishing risk (NAFLVR), under which we can prove a version of the fundamental theorem of asset pricing (FTAP) in markets with an (even uncountably) infinite number of assets, as is for instance the case in bond markets.
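
For readers who want the shape of the statement, here is an informal LaTeX rendering of the result described above. The characterization via an equivalent separating measure is recalled from memory of the paper's main theorem and should be checked against the paper itself for the precise strategy class and topology.

```latex
% Informal paraphrase; the precise definitions (admissible generalized
% strategies, the topology on the set of attainable terminal wealths, etc.)
% are in the paper itself.
\begin{theorem}[FTAP for large financial markets, informal]
A large financial market (with countably or even uncountably many assets)
satisfies \emph{no asymptotic free lunch with vanishing risk} (NAFLVR) if and
only if there exists an equivalent separating measure for the set of
attainable terminal wealths.
\end{theorem}
```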
