Search Results for author: Vicenç Rúbies Royo

Found 1 paper, 0 papers with code

A Multi-Armed Bandit Approach for Online Expert Selection in Markov Decision Processes

no code implementations • 18 Jul 2017 • Eric Mazumdar, Roy Dong, Vicenç Rúbies Royo, Claire Tomlin, S. Shankar Sastry

We formulate a multi-armed bandit (MAB) approach to choosing expert policies online in Markov decision processes (MDPs).
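A minimal sketch of what such an approach could look like, assuming a UCB1-style index over the expert policies; the names `experts`, `run_episode`, and the use of episodic return as the bandit reward are illustrative assumptions, not the paper's exact algorithm.

```python
import math

def ucb1_expert_selection(experts, run_episode, horizon):
    """Treat each expert policy as a bandit arm; pick experts online via UCB1.

    experts:     list of policies (each a callable mapping state -> action)
    run_episode: callable(policy) -> scalar episodic return (assumed in [0, 1])
    horizon:     total number of episodes (arm pulls)
    """
    n = [0] * len(experts)       # times each expert has been deployed
    mean = [0.0] * len(experts)  # empirical mean return per expert

    for t in range(1, horizon + 1):
        if t <= len(experts):
            k = t - 1  # deploy each expert once to initialize its estimate
        else:
            # UCB1 index: empirical mean plus an exploration bonus that
            # shrinks as an expert is deployed more often
            k = max(range(len(experts)),
                    key=lambda i: mean[i] + math.sqrt(2 * math.log(t) / n[i]))
        r = run_episode(experts[k])
        n[k] += 1
        mean[k] += (r - mean[k]) / n[k]  # incremental mean update
    return mean, n
```

The design choice here is to score a whole episode under one expert as a single bandit pull, which sidesteps credit assignment within the MDP at the cost of coarser feedback.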

Systems and Control
