2 code implementations • 28 Apr 2021 • Calypso Herrera, Florian Krach, Pierre Ruyssen, Josef Teichmann
This paper presents the benefits of using randomized neural networks instead of standard basis functions or deep neural networks to approximate the solutions of optimal stopping problems.
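The core idea of a randomized neural network — fix the hidden-layer weights at random and train only the linear readout, so fitting reduces to least squares — can be sketched as follows. This is an illustrative toy regression, not the paper's optimal-stopping setup; all names and sizes here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_randomized_nn(X, y, hidden=200):
    """Randomized NN sketch: random fixed hidden layer, trained readout."""
    d = X.shape[1]
    W = rng.normal(size=(d, hidden))   # random hidden weights, never trained
    b = rng.normal(size=hidden)        # random biases, never trained
    H = np.tanh(X @ W + b)             # random nonlinear features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # only the readout is fit
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy problem: approximate f(x) = sin(x) on [0, 3]
X = rng.uniform(0, 3, size=(500, 1))
y = np.sin(X[:, 0])
W, b, beta = fit_randomized_nn(X, y)
err = np.max(np.abs(predict(X, W, b, beta) - y))
```

Because only `beta` is optimized, training is a single convex least-squares solve rather than gradient descent over all weights, which is what makes the approach attractive for the repeated regressions in optimal stopping.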
2 code implementations • ICLR 2021 • Calypso Herrera, Florian Krach, Josef Teichmann
We introduce the Neural Jump ODE (NJ-ODE) that provides a data-driven approach to learn, continuously in time, the conditional expectation of a stochastic process.
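The mechanics of the model — a latent state that evolves continuously via an ODE between observations and jumps to a new value at each observation, with a readout giving the conditional-expectation estimate — can be sketched structurally. The weights below are random stand-ins for trained networks, and the Euler solver and linear/tanh layers are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 8  # latent dimension (assumed for illustration)

# Hypothetical random weights standing in for trained networks.
W_ode = rng.normal(scale=0.1, size=(D, D))       # drift network
W_jump = rng.normal(scale=0.1, size=(D, D + 1))  # jump network
w_out = rng.normal(scale=0.1, size=D)            # readout

def drift(h):
    return np.tanh(W_ode @ h)

def jump(h, x_obs):
    return np.tanh(W_jump @ np.concatenate([h, [x_obs]]))

def njode_path(obs_times, obs_values, t_grid, dt=0.01):
    """Euler-evolve the latent state; jump whenever an observation arrives."""
    h = np.zeros(D)
    obs = dict(zip(obs_times, obs_values))
    out, t = [], 0.0
    for t_next in t_grid:
        while t < t_next:
            h = h + dt * drift(h)        # continuous evolution between obs
            t += dt
        if t_next in obs:
            h = jump(h, obs[t_next])     # discrete jump at an observation
        out.append(w_out @ h)            # estimate of the cond. expectation
    return np.array(out)

preds = njode_path([0.5, 1.0], [0.3, -0.2], t_grid=[0.25, 0.5, 0.75, 1.0])
```

The key design point is that the latent state is defined at *every* time, so the model produces estimates continuously in time rather than only at observation instants.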
1 code implementation • 28 Apr 2020 • Calypso Herrera, Florian Krach, Anastasis Kratsios, Pierre Ruyssen, Josef Teichmann
Robust PCA of covariance matrices plays an essential role in isolating key explanatory features.
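For context, robust PCA splits a matrix into a low-rank part plus a sparse part. A textbook way to do this is principal component pursuit via an inexact augmented-Lagrangian scheme, alternating singular-value thresholding with entrywise soft-thresholding; the sketch below uses that standard scheme, which is not necessarily the algorithm of this paper.

```python
import numpy as np

def robust_pca(M, n_iter=100):
    """Principal component pursuit sketch: M ≈ L (low-rank) + S (sparse)."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))        # standard sparsity weight
    mu = 1.25 / np.linalg.norm(M, 2)      # initial penalty parameter
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                  # dual variable
    for _ in range(n_iter):
        # L-update: singular-value thresholding at level 1/mu
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # S-update: entrywise soft-thresholding at level lam/mu
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # dual ascent; growing mu drives L + S toward M
        Y = Y + mu * (M - L - S)
        mu = min(mu * 1.5, 1e7)
    return L, S

# Toy data: a rank-3 matrix corrupted by a few large sparse entries.
rng = np.random.default_rng(2)
low = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 50))
sparse = np.zeros((50, 50))
sparse.flat[rng.choice(2500, size=50, replace=False)] = rng.normal(scale=10, size=50)
L, S = robust_pca(low + sparse)
```

The soft-thresholding step is what gives the decomposition its robustness: large isolated corruptions are absorbed into `S` instead of distorting the principal components of `L`.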
no code implementations • 27 Apr 2020 • Calypso Herrera, Florian Krach, Josef Teichmann
The Lipschitz constant is an important quantity that arises in analysing the convergence of gradient-based optimization methods.
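As a concrete illustration of why this quantity is computable in principle but usually only bounded in practice: for a feed-forward network with 1-Lipschitz activations (e.g. ReLU or tanh), the product of the layers' spectral norms gives a simple, often loose, upper bound on the network's Lipschitz constant. This is a well-known bound, not this paper's specific estimator.

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """Product of spectral norms: an upper bound on the Lipschitz constant
    of a feed-forward net with 1-Lipschitz activations."""
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

# Hypothetical two-layer network: R^4 -> R^16 -> R^1.
rng = np.random.default_rng(3)
Ws = [rng.normal(size=(16, 4)), rng.normal(size=(1, 16))]
bound = lipschitz_upper_bound(Ws)
```

In gradient-based optimization, such a bound on the gradient's Lipschitz constant is what licenses step-size choices like `1 / L` in convergence analyses.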