Search Results for author: Josef Teichmann

Found 19 papers, 6 papers with code

Applications of Signature Methods to Market Anomaly Detection

no code implementations • 7 Jan 2022 • Erdinc Akyildirim, Matteo Gambara, Josef Teichmann, Syang Zhou

In this case, we are able to identify pump-and-dump attempts organized on social networks with F1 scores of up to 88% by means of our unsupervised learning algorithm, thus achieving results close to the state of the art in the field based on supervised learning.

Anomaly Detection • Time Series
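Signature methods summarize a path by its iterated integrals. As a minimal, hedged illustration of the feature the paper builds on (not its anomaly-detection pipeline), here is a NumPy computation of the first two signature levels of a piecewise-linear path; the function name is illustrative.

```python
import numpy as np

def signature_levels(path):
    """First two signature levels of a piecewise-linear path.

    path: array of shape (n_points, d).
    Returns (S1, S2): S1 has shape (d,) and is the total increment;
    S2[i, j] is the iterated integral of dx^i followed by dx^j.
    """
    dx = np.diff(path, axis=0)          # segment increments, shape (n-1, d)
    S1 = dx.sum(axis=0)
    d = path.shape[1]
    S2 = np.zeros((d, d))
    x_running = np.zeros(d)             # increment accumulated so far
    for step in dx:
        # Chen-style update: cross terms with the past increment,
        # plus the within-segment term (step ⊗ step) / 2.
        S2 += np.outer(x_running, step) + np.outer(step, step) / 2.0
        x_running += step
    return S1, S2

# Example: the straight-line path t -> (t, 2t) on [0, 1]. For a linear
# path, level two degenerates to S2 = outer(S1, S1) / 2.
path = np.linspace(0.0, 1.0, 11)[:, None] * np.array([1.0, 2.0])
S1, S2 = signature_levels(path)
```

Higher signature levels follow the same pattern; in practice one truncates at a fixed depth and feeds the resulting vector to a downstream learner.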

Infinite width (finite depth) neural networks benefit from multi-task learning unlike shallow Gaussian Processes -- an exact quantitative macroscopic characterization

no code implementations • 31 Dec 2021 • Jakob Heiss, Josef Teichmann, Hanna Wutte

We prove in this paper that optimizing wide ReLU neural networks (NNs) with at least one hidden layer using l2-regularization on the parameters enforces multi-task learning through representation learning, also in the limit of infinite width.

Gaussian Processes • L2 Regularization • +2

Expressive Power of Randomized Signature

no code implementations • NeurIPS Workshop DLDE 2021 • Christa Cuchiero, Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega, Josef Teichmann

We consider the question whether the time evolution of controlled differential equations on general state spaces can be arbitrarily well approximated by (regularized) regressions on features generated themselves through randomly chosen dynamical systems of moderately high dimension.

Transfer Learning
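The "randomly chosen dynamical systems" in the abstract drive a reservoir state with the control path's increments; only a linear readout on the final state would then be trained. A hedged sketch of such a randomized-signature feature map (dimensions, scalings, and the function name are illustrative assumptions, not the paper's construction):

```python
import numpy as np

def randomized_signature(path, dim=50, seed=0, activation=np.tanh):
    """Reservoir features driven by a path's increments.

    One fixed random vector field per input channel is applied to the
    state r, scaled by that channel's increment:
        r <- r + sum_i activation(A_i @ r + b_i) * dx_i
    """
    rng = np.random.default_rng(seed)
    d = path.shape[1]
    A = rng.standard_normal((d, dim, dim)) / np.sqrt(dim)
    b = rng.standard_normal((d, dim))
    r = rng.standard_normal(dim) / np.sqrt(dim)
    for dx in np.diff(path, axis=0):
        r = r + sum(activation(A[i] @ r + b[i]) * dx[i] for i in range(d))
    return r

# Distinct paths map to distinct feature vectors, while a fixed seed
# makes the map deterministic.
t = np.linspace(0.0, 1.0, 100)
path1 = np.stack([t, np.sin(2 * np.pi * t)], axis=1)
path2 = np.stack([t, np.cos(2 * np.pi * t)], axis=1)
f1 = randomized_signature(path1)
f2 = randomized_signature(path2)
```

A regularized linear regression on such features then plays the role of the readout that approximates the controlled dynamics.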

Optimal Stopping via Randomized Neural Networks

2 code implementations • 28 Apr 2021 • Calypso Herrera, Florian Krach, Pierre Ruyssen, Josef Teichmann

This paper presents new machine learning approaches to approximate the solution of optimal stopping problems.
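The core idea of randomized networks is to fix a random hidden layer and fit only the linear readout, which for optimal stopping reduces each backward-induction step to a least-squares problem. A hedged, self-contained sketch in the spirit of Longstaff-Schwartz with random ReLU features (all names and parameter values below are illustrative, not the paper's exact architecture):

```python
import numpy as np

def price_bermudan_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                       n_steps=10, n_paths=20000, n_features=50, seed=0):
    """Bermudan put price estimate: backward induction where the
    continuation value is regressed on random, untrained ReLU
    features; only the linear readout is fit at each date."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Simulate geometric Brownian motion paths under the pricing measure.
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1)
    S = S0 * np.exp(log_paths)                     # (n_paths, n_steps)
    # Random hidden layer shared across exercise dates.
    W = rng.standard_normal((1, n_features))
    b = rng.standard_normal(n_features)
    features = lambda s: np.maximum(s[:, None] / S0 @ W + b, 0.0)
    payoff = lambda s: np.maximum(K - s, 0.0)
    cash = payoff(S[:, -1]) * np.exp(-r * T)       # discounted to time 0
    for k in range(n_steps - 2, -1, -1):
        itm = payoff(S[:, k]) > 0                  # regress in-the-money paths
        X = features(S[itm, k])
        beta, *_ = np.linalg.lstsq(X, cash[itm], rcond=None)
        continuation = X @ beta
        exercise = payoff(S[itm, k]) * np.exp(-r * (k + 1) * dt)
        stop = exercise > continuation
        idx = np.where(itm)[0][stop]
        cash[idx] = exercise[stop]
    return cash.mean()
```

Because the stopping rule derived from the regression is suboptimal, the estimate is (up to Monte Carlo noise) a lower bound on the true Bermudan value; the appeal of the random layer is that no gradient-based training is needed.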

NOMU: Neural Optimization-based Model Uncertainty

1 code implementation • 26 Feb 2021 • Jakob Heiss, Jakob Weissteiner, Hanna Wutte, Sven Seuken, Josef Teichmann

To address this, we introduce a new approach for capturing model uncertainty for NNs, which we call Neural Optimization-based Model Uncertainty (NOMU).

Deep Hedging under Rough Volatility

no code implementations • 3 Feb 2021 • Blanka Horvath, Josef Teichmann, Zan Zuric

We investigate the performance of the Deep Hedging framework under training paths beyond the (finite dimensional) Markovian setup.

Time Series

A deep learning model for gas storage optimization

no code implementations • 3 Feb 2021 • Nicolas Curin, Michael Kettler, Xi Kleisinger-Yu, Vlatka Komaric, Thomas Krabichler, Josef Teichmann, Hanna Wutte

To the best of our knowledge, the application of deep learning in the field of quantitative risk management is still a relatively recent phenomenon.

Reducing the number of neurons of Deep ReLU Networks based on the current theory of Regularization

no code implementations • 1 Jan 2021 • Jakob Heiss, Alexis Stockinger, Josef Teichmann

We introduce a new Reduction Algorithm which exploits the properties of ReLU neurons to significantly reduce the number of neurons in a trained Deep Neural Network.
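As a hedged illustration of why such reductions can be lossless, here is only the most basic case: a hidden ReLU neuron whose readout weight is zero cannot affect the output, so it can be dropped without changing the function. All names below are illustrative; the paper's algorithm handles far more than this trivial pruning.

```python
import numpy as np

def forward(x, W1, b1, w2, b2):
    """Two-layer ReLU network, f(x) = w2 @ relu(W1 @ x + b1) + b2."""
    return w2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def prune_zero_readout(W1, b1, w2, b2, tol=1e-12):
    """Drop hidden neurons whose readout weight is numerically zero;
    their activation is multiplied by 0 and never reaches the output."""
    keep = np.abs(w2) > tol
    return W1[keep], b1[keep], w2[keep], b2

rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 3))
b1 = rng.standard_normal(8)
w2 = rng.standard_normal(8)
w2[[1, 4, 6]] = 0.0                     # three neurons have dead readouts
b2 = 0.5
W1p, b1p, w2p, b2p = prune_zero_readout(W1, b1, w2, b2)

x = rng.standard_normal(3)              # both networks agree on any input
```

Regularized training tends to drive exactly such redundant weights toward zero, which is what connects neuron reduction to the regularization theory the title refers to.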

Discrete-time signatures and randomness in reservoir computing

no code implementations • 17 Sep 2020 • Christa Cuchiero, Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega, Josef Teichmann

A new explanation of the geometric nature of the reservoir computing phenomenon is presented.

Deep Replication of a Runoff Portfolio

no code implementations • 10 Sep 2020 • Thomas Krabichler, Josef Teichmann

To the best of our knowledge, the application of deep learning in the field of quantitative risk management is still a relatively recent phenomenon.

Decision Making

Deep Investing in Kyle's Single Period Model

no code implementations • 24 Jun 2020 • Paul Friedrich, Josef Teichmann

The Kyle model describes how an equilibrium of order sizes and security prices naturally arises between a trader with insider information and the price providing market maker as they interact through a series of auctions.

Computational Finance • Trading and Market Microstructure
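The equilibrium the paper learns with neural networks has a well-known closed form in the classical single-period Kyle model with linear strategies: the insider trades x = β(v − p0) with β = σ_u/σ_v, and the market maker quotes p = p0 + λ·(order flow) with price impact λ = σ_v/(2σ_u). A short simulation checking that this pricing rule is the conditional expectation (textbook solution only, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)
p0, sigma_v, sigma_u, n = 10.0, 2.0, 1.0, 200000

lam = sigma_v / (2.0 * sigma_u)             # market maker's price impact
beta = sigma_u / sigma_v                    # insider's trading intensity

v = p0 + sigma_v * rng.standard_normal(n)   # true value, known to insider
u = sigma_u * rng.standard_normal(n)        # noise traders' order flow
x = beta * (v - p0)                         # insider's order
y = x + u                                   # total flow seen by market maker
p = p0 + lam * y                            # quoted price

# Consistency check: lam equals the OLS slope of v on y, so the linear
# price rule approximates the conditional expectation E[v | y].
slope = np.cov(v, y)[0, 1] / np.var(y)
```

The deep-learning approach recovers this fixed point by letting both agents' strategies be networks trained against each other.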

Consistent Recalibration Models and Deep Calibration

no code implementations • 16 Jun 2020 • Matteo Gambara, Josef Teichmann

Consistent Recalibration models (CRC) have been introduced to capture, in the necessary generality, the dynamic features of term structures of derivatives' prices.

Neural Jump Ordinary Differential Equations: Consistent Continuous-Time Prediction and Filtering

2 code implementations • ICLR 2021 • Calypso Herrera, Florian Krach, Josef Teichmann

We introduce the Neural Jump ODE (NJ-ODE) that provides a data-driven approach to learn, continuously in time, the conditional expectation of a stochastic process.

Time Series

A generative adversarial network approach to calibration of local stochastic volatility models

1 code implementation • 5 May 2020 • Christa Cuchiero, Wahid Khosrawi, Josef Teichmann

We propose a fully data-driven approach to calibrate local stochastic volatility (LSV) models, circumventing in particular the ad hoc interpolation of the volatility surface.

Denise: Deep Robust Principal Component Analysis for Positive Semidefinite Matrices

1 code implementation • 28 Apr 2020 • Calypso Herrera, Florian Krach, Anastasis Kratsios, Pierre Ruyssen, Josef Teichmann

The robust PCA of covariance matrices plays an essential role when isolating key explanatory features.
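The target decomposition here is M = L + S with L low-rank positive semidefinite and S sparse. As a hedged sketch, the snippet below only constructs such an instance and verifies its defining properties; Denise itself learns the map M → (L, S) with a neural network, which is not reproduced here, and all dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, rank = 20, 3

U = rng.standard_normal((d, rank))
L = U @ U.T                                   # low-rank PSD part (L = U Uᵀ)
S = np.zeros((d, d))
idx = rng.choice(d, size=5, replace=False)
S[idx, idx] = rng.uniform(1.0, 2.0, size=5)   # sparse corruption (diagonal)
M = L + S                                     # observed covariance-like matrix

eigvals = np.linalg.eigvalsh(L)               # PSD: all eigenvalues >= 0
```

Writing L = U Uᵀ is also how positive semidefiniteness of the low-rank factor can be enforced by construction inside a network's output layer.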

Estimating Full Lipschitz Constants of Deep Neural Networks

no code implementations • 27 Apr 2020 • Calypso Herrera, Florian Krach, Josef Teichmann

We estimate the Lipschitz constants of the gradient of a deep neural network and the network itself with respect to the full set of parameters.
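For context, the classical (often loose) bound on the Lipschitz constant of a feed-forward network with 1-Lipschitz activations, with respect to the *input*, is the product of the layers' spectral norms; the paper's contribution concerns the much harder constants with respect to the full parameter set. A hedged baseline sketch (function name and shapes are illustrative):

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """Product of layer spectral norms: an upper bound on the input
    Lipschitz constant of a feed-forward net with 1-Lipschitz
    activations (e.g. ReLU, tanh). Often loose, but cheap."""
    return float(np.prod([np.linalg.norm(W, ord=2) for W in weights]))

rng = np.random.default_rng(0)
weights = [rng.standard_normal((64, 32)), rng.standard_normal((32, 1))]
bound = lipschitz_upper_bound(weights)
```

For 2-D arrays, `np.linalg.norm(W, ord=2)` returns the largest singular value, which is exactly the layer's operator norm.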

How Implicit Regularization of ReLU Neural Networks Characterizes the Learned Function -- Part I: the 1-D Case of Two Layers with Random First Layer

no code implementations • 7 Nov 2019 • Jakob Heiss, Josef Teichmann, Hanna Wutte

These observations motivate us to analyze properties of the neural networks found by gradient descent initialized close to zero, an initialization frequently employed in practice.

Deep Hedging

2 code implementations • 8 Feb 2018 • Hans Bühler, Lukas Gonon, Josef Teichmann, Ben Wood

We present a framework for hedging a portfolio of derivatives in the presence of market frictions such as transaction costs, market impact, liquidity constraints or risk limits using modern deep reinforcement machine learning methods.

Computational Finance • Numerical Analysis • Optimization and Control • Probability • Risk Management • 91G60, 65K99
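Deep Hedging trains the hedging strategy itself (a network mapping market state to position) under a risk measure, precisely because frictions make the classical delta hedge suboptimal. As a hedged, friction-free baseline against which such strategies are typically compared (all parameter values below are illustrative), here is a discrete Black-Scholes delta hedge of a short call:

```python
import math
import numpy as np

def bs_delta(s, K, sigma, tau):
    """Black-Scholes call delta with zero rates; the classical
    benchmark strategy that a trained network would replace."""
    tau = np.maximum(tau, 1e-12)
    d1 = (np.log(s / K) + 0.5 * sigma**2 * tau) / (sigma * np.sqrt(tau))
    return 0.5 * (1.0 + np.vectorize(math.erf)(d1 / math.sqrt(2.0)))

def hedged_pnl(S, K, sigma, T, strategy):
    """Terminal P&L of selling a call and rebalancing the underlying
    with the given strategy; no transaction costs in this sketch."""
    n_steps = S.shape[1] - 1
    dt = T / n_steps
    pnl = -np.maximum(S[:, -1] - K, 0.0)          # short call payoff
    for k in range(n_steps):
        delta = strategy(S[:, k], K, sigma, T - k * dt)
        pnl += delta * (S[:, k + 1] - S[:, k])    # trading gains
    return pnl

rng = np.random.default_rng(0)
S0, K, sigma, T, n_steps, n_paths = 100.0, 100.0, 0.2, 0.25, 50, 5000
z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum(-0.5 * sigma**2 * T / n_steps
                          + sigma * math.sqrt(T / n_steps) * z, axis=1))
S = np.concatenate([np.full((n_paths, 1), S0), S], axis=1)

unhedged = -np.maximum(S[:, -1] - K, 0.0)         # no hedge at all
hedged = hedged_pnl(S, K, sigma, T, bs_delta)     # much lower P&L variance
```

In the deep-hedging setup, `strategy` becomes a neural network and the training loss is a convex risk measure of `pnl`, which lets the learned policy account for transaction costs, market impact, and risk limits that the delta hedge ignores.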
