# Online Learning in Markov Decision Processes with Adversarially Chosen Transition Probability Distributions

We study the problem of online learning in Markov Decision Processes (MDPs) when both the transition probability distributions and the loss functions are chosen by an adversary. We present an algorithm that, under a mixing assumption, achieves $O(\sqrt{T\log|\Pi|}+\log|\Pi|)$ regret with respect to a comparison set of policies $\Pi$...
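The leading $\sqrt{T\log|\Pi|}$ term in the quoted bound is the classic rate of exponential-weights methods over a finite comparison class. As a hedged illustration only (not the paper's actual algorithm, which must additionally handle adversarial transitions and mixing), the sketch below runs the standard Hedge algorithm over $N = |\Pi|$ policies against adversarial per-round losses and checks that the empirical regret stays within the $\sqrt{2T\ln N}$ bound; all names here (`hedge`, `losses`, `eta`) are illustrative choices, not from the paper.

```python
import math
import random

def hedge(losses, eta):
    """Exponential-weights (Hedge) over a finite set of N policies.

    losses: T x N list of per-round losses in [0, 1] (adversary's choice).
    eta:    learning rate; eta = sqrt(2 * ln(N) / T) yields the classic
            O(sqrt(T * ln N)) regret, matching the leading term of the
            bound quoted in the abstract.
    Returns the learner's cumulative expected loss.
    """
    n = len(losses[0])
    weights = [1.0] * n
    total = 0.0
    for loss in losses:
        z = sum(weights)
        probs = [w / z for w in weights]          # play a policy ~ probs
        total += sum(p * l for p, l in zip(probs, loss))
        # Multiplicative update: down-weight policies with high loss.
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, loss)]
    return total

# Illustrative adversary: N = 4 policies, T = 1000 rounds of [0, 1] losses.
random.seed(0)
T, N = 1000, 4
losses = [[random.random() for _ in range(N)] for _ in range(T)]
eta = math.sqrt(2 * math.log(N) / T)

learner = hedge(losses, eta)
best = min(sum(losses[t][i] for t in range(T)) for i in range(N))
regret = learner - best  # should be at most sqrt(2 * T * ln N)
```

In the MDP setting of the paper, the analogous update cannot be applied so directly, because a policy's loss depends on the state distribution it induces; the mixing assumption is what lets per-round comparisons against each fixed policy remain meaningful.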

