Search Results for author: D. Belomestny

Found 4 papers, 2 papers with code

UVIP: Model-Free Approach to Evaluate Reinforcement Learning Algorithms

1 code implementation • 5 May 2021 • D. Belomestny, I. Levin, E. Moulines, A. Naumov, S. Samsonov, V. Zorina

Policy evaluation is an important instrument for the comparison of different algorithms in Reinforcement Learning (RL).

Reinforcement Learning (RL)

Variance reduction for Markov chains with application to MCMC

1 code implementation • 8 Oct 2019 • D. Belomestny, L. Iosipoi, E. Moulines, A. Naumov, S. Samsonov

In this paper we propose a novel variance reduction approach for additive functionals of Markov chains based on minimization of an estimate for the asymptotic variance of these functionals over suitable classes of control variates.
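The core idea above can be illustrated in its simplest form. The following is a minimal sketch of control-variate variance reduction in an i.i.d. toy setting, not the paper's method for Markov chains (which minimizes an estimate of the *asymptotic* variance over classes of control variates): we estimate E[exp(X)] for X ~ N(0, 1) and subtract a zero-mean control variate with the variance-minimizing coefficient.

```python
import numpy as np

# Toy i.i.d. analogue of control-variate variance reduction:
# estimate E[exp(X)] for X ~ N(0, 1), true value e^{1/2},
# using g(X) = X (known mean 0) as a control variate.
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)

fx = np.exp(x)   # target functional
gx = x           # control variate, E[g(X)] = 0

# Coefficient minimizing the empirical variance of fx - b * gx:
# b* = Cov(f, g) / Var(g)
b = np.cov(fx, gx)[0, 1] / gx.var()

plain = fx.mean()            # ordinary Monte Carlo estimate
cv = (fx - b * gx).mean()    # control-variate estimate: same mean, lower variance
```

Both estimators are unbiased for e^{1/2} ≈ 1.649; subtracting b·g only removes variance. The paper's contribution is choosing such control variates for additive functionals of Markov chains, where the relevant quantity is the asymptotic variance rather than the per-sample variance used here.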

Variance reduction for additive functional of Markov chains via martingale representations

no code implementations • 18 Mar 2019 • D. Belomestny, E. Moulines, S. Samsonov

In this paper we propose an efficient variance reduction approach for additive functionals of Markov chains relying on a novel discrete time martingale representation.
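A toy sketch of the martingale idea, under the simplifying assumption of an AR(1) chain whose kernel is known in closed form (so the Poisson equation can be solved exactly; the paper's construction does not require this): subtracting the resulting zero-mean martingale increments makes the Monte Carlo average telescope.

```python
import numpy as np

# Toy AR(1) chain: X_{k+1} = a X_k + eps_k, stationary mean 0.
rng = np.random.default_rng(1)
a, n = 0.9, 50_000
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for k in range(1, n):
    x[k] = a * x[k - 1] + eps[k]

# Estimate E[f(X)] with f(x) = x (true value 0).  For this kernel the
# Poisson equation is solved by V(x) = x / (1 - a), so
#   M_k = V(X_k) - (PV)(X_{k-1})
# is a zero-mean martingale increment usable as a control variate.
V = x / (1 - a)
PV = a * x[:-1] / (1 - a)   # (PV)(X_{k-1}) = a X_{k-1} / (1 - a)
m = V[1:] - PV              # martingale increments

plain = x[1:].mean()        # ordinary ergodic average
mcv = (x[1:] - m).mean()    # martingale-corrected average (telescopes)
```

Here x[1:] - m reduces to a(X_{k-1} - X_k)/(1 - a), so the average telescopes to a boundary term of order 1/n, far below the O(n^{-1/2}) error of the plain average. In practice the Poisson solution is unknown; the paper builds discrete-time martingale representations that play this role.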

Empirical Variance Minimization with Applications in Variance Reduction and Optimal Control

no code implementations • 13 Dec 2017 • D. Belomestny, L. Iosipoi, Q. Paris, N. Zhivotovskiy

We study the problem of empirical minimization for variance-type functionals over functional classes.
