Search Results for author: Philip Ball

Found 3 papers, 1 paper with code

Revisiting Design Choices in Offline Model-Based Reinforcement Learning

no code implementations NeurIPS 2021 Cong Lu, Philip Ball, Jack Parker-Holder, Michael Osborne, S Roberts

Offline reinforcement learning enables agents to make use of large pre-collected datasets of environment transitions and learn control policies without the need for potentially expensive or unsafe online data collection.

Tags: Bayesian Optimization · Model-based Reinforcement Learning +3

Ready Policy One: World Building Through Active Learning

no code implementations ICML 2020 Philip Ball, Jack Parker-Holder, Aldo Pacchiano, Krzysztof Choromanski, Stephen Roberts

Model-Based Reinforcement Learning (MBRL) offers a promising direction for sample-efficient learning, often achieving state-of-the-art results on continuous control tasks.

Tags: Active Learning · Continuous Control +1
