Search Results for author: A. Max Reppen

Found 4 papers, 3 papers with code

Neural Optimal Stopping Boundary

no code implementations · 9 May 2022 · A. Max Reppen, H. Mete Soner, Valentin Tissot-Daguette

A method based on deep artificial neural networks and empirical risk minimization is developed to calculate the boundary separating the stopping and continuation regions in optimal stopping.
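A minimal numpy sketch of the general idea, not the paper's method: parameterize the exercise boundary directly, stop each simulated path the first time it crosses the boundary, and pick the parameters that maximize the empirical (Monte Carlo) value. All market parameters are hypothetical, and a linear-in-time boundary with a grid search stands in for the paper's neural network trained by empirical risk minimization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical American-put setup: GBM paths of the underlying.
n_paths, n_steps, T = 2000, 50, 1.0
S0, K, r, sigma = 100.0, 100.0, 0.05, 0.2
dt = T / n_steps
Z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1))
S = np.concatenate([np.full((n_paths, 1), S0), S], axis=1)
t = np.linspace(0.0, T, n_steps + 1)

def stopped_value(theta):
    """Exercise the put the first time S falls below b(t) = theta[0] + theta[1] * t."""
    b = theta[0] + theta[1] * t          # candidate boundary curve
    below = S <= b                        # stopping region indicator
    below[:, -1] = True                   # always exercise (or expire) at maturity
    tau = np.argmax(below, axis=1)        # first time index in the stopping region
    payoff = np.maximum(K - S[np.arange(n_paths), tau], 0.0)
    return np.mean(np.exp(-r * t[tau]) * payoff)

# Crude empirical search over boundary parameters (the paper uses a network + SGD).
grid = [(a, b) for a in np.linspace(70, 95, 6) for b in np.linspace(-20, 20, 5)]
best = max(grid, key=stopped_value)
```

The key structural point the sketch shares with the paper is that the *boundary itself* is the optimization variable, rather than a value function from which the boundary is extracted afterwards.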

Deep Stochastic Optimization in Finance

1 code implementation · 9 May 2022 · A. Max Reppen, H. Mete Soner, Valentin Tissot-Daguette

This paper outlines and, through stylized examples, evaluates a novel and highly effective computational technique in quantitative finance.

Stochastic Optimization
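The common pattern behind deep stochastic optimization in finance is: simulate paths, parameterize the decision rule, and minimize the resulting empirical objective by gradient descent. A toy numpy sketch under assumed lognormal dynamics, with a single static hedge ratio `delta` standing in for the neural-network control (all names and parameters here are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical lognormal terminal prices of the underlying.
n_paths = 5000
S0, sigma = 1.0, 0.2
S_T = S0 * np.exp(-0.5 * sigma**2 + sigma * rng.standard_normal(n_paths))
payoff = np.maximum(S_T - 1.0, 0.0)       # call option to be hedged

# Empirical objective: mean squared hedging error of a static position of size delta.
def empirical_loss(delta):
    pnl = delta * (S_T - S0) - payoff
    return np.mean(pnl**2)

# Plain gradient descent on the simulated loss (a deep network replaces delta in practice).
delta, lr = 0.0, 0.5
for _ in range(200):
    grad = np.mean(2 * (delta * (S_T - S0) - payoff) * (S_T - S0))
    delta -= lr * grad
```

Because the loss is an average over simulated scenarios, the same recipe scales from this one-parameter example to high-dimensional feedback controls given by neural networks.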

Deep Empirical Risk Minimization in finance: looking into the future

1 code implementation · 18 Nov 2020 · A. Max Reppen, H. Mete Soner

Many modern computational approaches to classical problems in quantitative finance are formulated as empirical risk minimization (ERM), allowing direct applications of classical results from statistical machine learning.

Synthetic Data Generation
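The ERM framing in the abstract can be illustrated with a toy regression on synthetic data: fit by minimizing the average loss on a finite training sample, then measure the risk on independently generated "future" data to gauge generalization. Everything below (the AR(1)-style signal, coefficients, sample sizes) is a hypothetical illustration, not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(n):
    """Fresh synthetic data: predict a next-step quantity y from a current signal x."""
    x = rng.standard_normal(n)
    y = 0.3 * x + 0.1 * rng.standard_normal(n)   # hypothetical signal + noise
    return x, y

# Empirical risk minimization: least-squares fit on a finite training sample.
x_tr, y_tr = simulate(500)
beta = np.sum(x_tr * y_tr) / np.sum(x_tr**2)

# Generalization check on independently simulated ("future") data.
x_te, y_te = simulate(500)
risk_tr = np.mean((y_tr - beta * x_tr)**2)
risk_te = np.mean((y_te - beta * x_te)**2)
```

The gap between `risk_tr` and `risk_te` is the object that statistical learning theory controls, which is what makes the ERM formulation useful for finance problems trained on simulated paths.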

Deep PQR: Solving Inverse Reinforcement Learning using Anchor Actions

1 code implementation · 15 Jul 2020 · Sinong Geng, Houssam Nassif, Carlos A. Manzanares, A. Max Reppen, Ronnie Sircar

We name our method PQR, as it sequentially estimates the Policy, the $Q$-function, and the Reward function by deep learning.

Reinforcement Learning (RL)
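The sequential P-then-Q-then-R structure can be sketched in a one-step toy problem (where Q coincides with the reward), assuming an entropy-regularized expert whose policy is a soft-max of the rewards; the anchor action is one whose reward is known (here, zero), which pins down the additive constant that inverse RL otherwise cannot identify. This is a hypothetical tabular illustration, not the paper's deep-learning estimator:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy one-step setting: true rewards for 3 actions; anchor action 0 has known reward 0.
r_true = np.array([0.0, 1.0, 2.0])
probs = np.exp(r_true) / np.exp(r_true).sum()    # soft-max (entropy-regularized) expert

# P-step: estimate the expert policy from demonstrated action frequencies.
actions = rng.choice(3, size=20000, p=probs)
p_hat = np.bincount(actions, minlength=3) / actions.size

# Q-step: under the soft-max model, log p_hat recovers Q up to an additive constant.
q_hat = np.log(p_hat)

# R-step: the anchor action's known reward (0) pins down that constant.
r_hat = q_hat - q_hat[0]
```

Even in this stripped-down form, the anchor action is what resolves the classic unidentifiability of rewards in inverse reinforcement learning.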
