1 code implementation • 13 Dec 2023 • Ariel Neufeld, Philipp Schmocker
In this paper, we study random neural networks, which are single-hidden-layer feedforward neural networks whose weights and biases are randomly initialized.
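The idea of a random neural network can be sketched as a random-feature regression: the hidden weights and biases are sampled once and frozen, and only the linear readout is trained. This is a minimal sketch, not the paper's construction; the network size, `tanh` activation, and ridge-regression readout are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_feature_fit(x_train, y_train, n_hidden=200, ridge=1e-6):
    """Single-hidden-layer network with randomly initialized, frozen hidden
    weights and biases; only the linear readout is trained (ridge regression)."""
    d = x_train.shape[1]
    W = rng.standard_normal((d, n_hidden))   # random hidden weights, never trained
    b = rng.standard_normal(n_hidden)        # random biases, never trained
    features = lambda x: np.tanh(x @ W + b)
    A = features(x_train)
    # Trainable part: least squares with a small ridge term for stability.
    readout = np.linalg.solve(A.T @ A + ridge * np.eye(n_hidden), A.T @ y_train)
    return lambda x: features(x) @ readout

# Usage: approximate sin on [0, 2*pi] with random features.
x = np.linspace(0.0, 2.0 * np.pi, 100)[:, None]
y = np.sin(x[:, 0])
model = random_feature_fit(x, y)
max_err = float(np.max(np.abs(model(x) - y)))
print(max_err)  # small in-sample approximation error
```

Only the readout vector is fitted, which is what makes such networks cheap to train while still being universal approximators in the large-width limit.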
1 code implementation • 19 Jun 2023 • Ariel Neufeld, Julian Sester
In this paper we demonstrate, both theoretically and numerically, that neural networks can detect model-free static arbitrage opportunities whenever the market admits some.
1 code implementation • 24 Oct 2022 • Dong-Young Lim, Ariel Neufeld, Sotirios Sabanis, Ying Zhang
We introduce a new Langevin dynamics based algorithm, called e-TH$\varepsilon$O POULA, to solve optimization problems with discontinuous stochastic gradients which naturally appear in real-world applications such as quantile estimation, vector quantization, CVaR minimization, and regularized optimization problems involving ReLU neural networks.
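The Langevin-dynamics idea behind such schemes can be sketched with the plain unadjusted iteration below; the actual e-TH$\varepsilon$O POULA additionally tames and boosts the gradient to handle discontinuities and super-linear growth, which this sketch does not reproduce. The objective, step size, and inverse temperature `beta` are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(1)

def langevin_minimize(grad, x0, step=1e-2, beta=1e4, n_steps=5000):
    # Unadjusted Langevin iteration:
    #   x_{k+1} = x_k - step * g(x_k) + sqrt(2 * step / beta) * N(0, I),
    # where g may be a discontinuous (sub)gradient; the injected Gaussian
    # noise lets the iterate explore despite the non-smoothness.
    x = np.asarray(x0, dtype=float)
    noise_scale = np.sqrt(2.0 * step / beta)
    for _ in range(n_steps):
        x = x - step * grad(x) + noise_scale * rng.standard_normal(x.shape)
    return x

# Usage: minimize f(x) = |x - 1| + x^2 / 2, whose subgradient jumps at x = 1.
grad = lambda x: np.sign(x - 1.0) + x
x_star = langevin_minimize(grad, x0=np.array([5.0]))
print(float(x_star[0]))  # close to the minimizer x = 1
```

The jump in `sign(x - 1)` is exactly the kind of discontinuity (as in quantile estimation or ReLU regularization) that rules out plain gradient descent analysis but is covered by Langevin-type algorithms.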
1 code implementation • 30 Sep 2022 • Ariel Neufeld, Julian Sester
We present a novel $Q$-learning algorithm to solve distributionally robust Markov decision problems, where the corresponding ambiguity set of transition probabilities for the underlying Markov decision process is a Wasserstein ball around a (possibly estimated) reference measure.
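The classical, non-robust $Q$-learning baseline underlying such algorithms can be sketched on a toy MDP. The 2-state kernel and rewards below are invented for illustration, and the update takes the plain expectation under the sampled kernel rather than the worst case over a Wasserstein ball, so this is only the starting point that the robust variant modifies.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 2-state, 2-action MDP; kernel P[s, a, s'] and rewards R[s, a] are made up.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.1, 0.9]]])
R = np.array([[0.0, 1.0],
              [2.0, 0.5]])
gamma = 0.5  # discount factor (illustrative)

def q_learning(P, R, gamma, n_steps=50000):
    """Tabular Q-learning with decaying step sizes and uniform exploration:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    n_s, n_a = R.shape
    Q = np.zeros((n_s, n_a))
    counts = np.zeros((n_s, n_a))
    s = 0
    for _ in range(n_steps):
        a = rng.integers(n_a)                 # explore actions uniformly
        s_next = int(rng.choice(n_s, p=P[s, a]))
        counts[s, a] += 1
        alpha = 1.0 / counts[s, a]            # Robbins-Monro step size
        Q[s, a] += alpha * (R[s, a] + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
    return Q

# Exact Q* by value iteration, to check the estimate.
Q_star = np.zeros_like(R)
for _ in range(200):
    Q_star = R + gamma * (P @ Q_star.max(axis=1))

Q_hat = q_learning(P, R, gamma)
err = float(np.max(np.abs(Q_hat - Q_star)))
print(err)  # small estimation error
```

The robust algorithm replaces the sampled next-state term by an inner optimization over all kernels within a Wasserstein distance of the reference measure, so that the learned policy hedges against model misspecification.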
no code implementations • 21 Sep 2022 • Ariel Neufeld, Philipp Schmocker
In this paper, we extend the Wiener-Itô chaos decomposition to the class of diffusion processes whose drift and diffusion coefficients are of linear growth.
1 code implementation • 13 Jun 2022 • Ariel Neufeld, Julian Sester, Mario Šikić
We introduce a general framework for Markov decision problems under model uncertainty in a discrete-time infinite horizon setting.
1 code implementation • 7 Apr 2022 • Shunan Sheng, Qikun Xiang, Ido Nevat, Ariel Neufeld
Firstly, we develop algorithms to perform approximate Likelihood Ratio Tests on the time-series observations, compressing them to a single bit for both point sensors and integral sensors.
1 code implementation • 3 Apr 2022 • Jonathan Ansari, Eva Lütkebohmert, Ariel Neufeld, Julian Sester
We show how inter-asset dependence information derived from market prices of options can lead to improved model-free price bounds for multi-asset derivatives.
1 code implementation • 7 Mar 2022 • Ariel Neufeld, Julian Sester, Daiying Yin
We present an approach, based on deep neural networks, for identifying robust statistical arbitrage strategies in financial markets.
1 code implementation • 19 Jul 2021 • Dong-Young Lim, Ariel Neufeld, Sotirios Sabanis, Ying Zhang
To illustrate the applicability of the main results, we consider an example from transfer learning with ReLU neural networks, which represents a key paradigm in machine learning.
1 code implementation • 21 Mar 2021 • Ariel Neufeld, Julian Sester
We introduce a novel and highly tractable supervised learning approach based on neural networks that can be applied for the computation of model-free price bounds of, potentially high-dimensional, financial derivatives and for the determination of optimal hedging strategies attaining these bounds.
no code implementations • 4 Feb 2021 • Ariel Neufeld, Julian Sester
…its marginals was recently established in Backhoff-Veraguas and Pammer [2] and Wiesel [21].
Probability · Optimization and Control · Mathematical Finance
1 code implementation • 4 Jan 2021 • Ariel Neufeld, Julian Sester
In this paper we extend discrete time semi-static trading strategies by also allowing for dynamic trading in a finite amount of options, and we study the consequences for the model-independent super-replication prices of exotic derivatives.
no code implementations • 2 Dec 2020 • Christian Beck, Sebastian Becker, Patrick Cheridito, Arnulf Jentzen, Ariel Neufeld
In this article we introduce and study a deep learning based approximation algorithm for solutions of stochastic partial differential equations (SPDEs).
no code implementations • 16 Jul 2020 • Daniel Bartl, Michael Kupper, Ariel Neufeld
In this paper we present a duality theory for the robust utility maximisation problem in continuous time for utility functions defined on the positive real axis.
1 code implementation • 25 Jun 2020 • Ariel Neufeld, Antonis Papapantoleon, Qikun Xiang
We consider derivatives written on multiple underlyings in a one-period financial market, and we are interested in the computation of model-free upper and lower bounds for their arbitrage-free prices.
Optimization and Control · Probability · Computational Finance · Mathematical Finance · Pricing of Securities
3 code implementations • 21 Apr 2020 • Pushpendu Ghosh, Ariel Neufeld, Jajati Keshari Sahoo
Hence we outperform the single-feature setting of Fischer & Krauss (2018) and Krauss et al. (2017), which uses only the daily returns computed from closing prices and achieves corresponding daily returns of 0.41% with LSTM and 0.39% with random forests, respectively.
Ranked #1 on Stock Market Prediction on S&P 500
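The return-direction classification task behind this study can be sketched on synthetic data: features are lagged returns and the label is the next day's direction. The planted linear signal, its weights, and the logistic-regression classifier below are all illustrative stand-ins; the paper itself trains LSTM and random-forest models on real S&P 500 constituent data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in data: standardized lagged returns with a planted signal.
n, lags = 2000, 5
X = rng.standard_normal((n, lags))                 # five lagged features per day
w_true = np.array([0.8, -0.4, 0.3, -0.2, 0.1])     # hypothetical signal weights
y = ((X @ w_true + 0.3 * rng.standard_normal(n)) > 0).astype(float)  # up/down label

# Logistic regression trained by gradient descent on the cross-entropy loss,
# as a simple stand-in for the paper's LSTM / random-forest classifiers.
w, b = np.zeros(lags), 0.0
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted up-probability
    w -= 0.5 * X.T @ (p - y) / n            # gradient step on weights
    b -= 0.5 * np.mean(p - y)               # gradient step on intercept

accuracy = float(np.mean(((X @ w + b) > 0) == (y == 1.0)))
print(accuracy)  # in-sample directional accuracy well above 0.5
```

In the trading application, the predicted up-probabilities are then ranked across stocks to form long-short portfolios, which is where the reported daily returns come from.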
no code implementations • 16 Jan 2020 • Philipp Harms, Chong Liu, Ariel Neufeld
In this paper we study the arbitrage theory of financial markets in the absence of a numéraire, in both discrete and continuous time.
no code implementations • 8 Jul 2019 • Christian Beck, Sebastian Becker, Patrick Cheridito, Arnulf Jentzen, Ariel Neufeld
In this paper we introduce a numerical method for nonlinear parabolic PDEs that combines operator splitting with deep learning.
no code implementations • 17 Jan 2019 • Dominik Alfke, Weston Baines, Jan Blechschmidt, Mauricio J. del Razo Sarmina, Amnon Drory, Dennis Elbrächter, Nando Farchmin, Matteo Gambara, Silke Glas, Philipp Grohs, Peter Hinz, Danijel Kivaranovic, Christian Kümmerle, Gitta Kutyniok, Sebastian Lunz, Jan Macdonald, Ryan Malthaner, Gregory Naisat, Ariel Neufeld, Philipp Christian Petersen, Rafael Reisenhofer, Jun-Da Sheng, Laura Thesing, Philipp Trunschke, Johannes von Lindheim, David Weber, Melanie Weber
We present a novel technique, based on deep learning and set theory, which yields exceptional classification and prediction results.
no code implementations • 29 Jan 2018 • Arnulf Jentzen, Benno Kuckuck, Ariel Neufeld, Philippe von Wurstemberger
Stochastic gradient descent (SGD) optimization algorithms are key ingredients in a series of machine learning applications.
Numerical Analysis · Probability
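The basic SGD scheme studied in such works can be sketched on a least-squares problem: each step moves against an unbiased single-sample estimate of the full gradient. The data-generating model, constant step size, and sample count below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def sgd(grad_sample, theta0, step=0.05, n_steps=2000):
    """Plain stochastic gradient descent: at each iteration, move against an
    unbiased estimate of the gradient built from one randomly drawn sample."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_steps):
        theta = theta - step * grad_sample(theta)
    return theta

# Usage: linear regression y = X theta* + noise, one data point per step.
theta_true = np.array([2.0, -1.0])
X = rng.standard_normal((500, 2))
y = X @ theta_true + 0.01 * rng.standard_normal(500)

def grad_sample(theta):
    i = rng.integers(len(X))                    # draw one data point uniformly
    return 2.0 * (X[i] @ theta - y[i]) * X[i]   # gradient of (x_i @ theta - y_i)^2

theta_hat = sgd(grad_sample, np.zeros(2))
err = float(np.max(np.abs(theta_hat - theta_true)))
print(theta_hat, err)  # estimate close to theta_true
```

The per-step cost is independent of the dataset size, which is the property that makes SGD-type algorithms the workhorse of large-scale machine learning.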