Model-based Reinforcement Learning
112 papers with code • 0 benchmarks • 0 datasets
These leaderboards are used to track progress in Model-based Reinforcement Learning.
Model-based reinforcement learning (RL) algorithms can attain excellent sample efficiency, but often lag behind the best model-free algorithms in terms of asymptotic performance.
Designing effective model-based reinforcement learning algorithms is difficult because the ease of data generation must be weighed against the bias of model-generated data.
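The trade-off between cheap model-generated data and its bias can be sketched with a toy Dyna-Q loop (an illustrative example, not the method of any specific paper listed here): each real transition trains both the value function and a dynamics model, and the model then generates many cheap extra updates. If the model were wrong, every one of those updates would propagate its error.

```python
import random

random.seed(0)

# Toy deterministic chain MDP: states 0..4, actions 0 (left) / 1 (right);
# reward 1.0 only on reaching state 4; episodes start at state 0.
N_STATES, GOAL = 5, 4

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

Q = {(s, a): 0.0 for s in range(N_STATES) for a in (0, 1)}
model = {}  # learned model: (s, a) -> (s', r), fit from real experience
alpha, gamma, k = 0.5, 0.9, 10  # k = model-generated updates per real step

def q_update(s, a, r, s2):
    target = r + gamma * max(Q[(s2, 0)], Q[(s2, 1)])
    Q[(s, a)] += alpha * (target - Q[(s, a)])

for episode in range(20):
    s = 0
    while s != GOAL:
        a = random.choice([0, 1])     # uniform exploration; Q-learning is off-policy
        s2, r = step(s, a)            # one expensive "real" transition
        q_update(s, a, r, s2)
        model[(s, a)] = (s2, r)       # fit the (here: memorized) dynamics model
        for _ in range(k):            # cheap model-generated experience (Dyna-Q)
            ms, ma = random.choice(list(model))
            ms2, mr = model[(ms, ma)]
            q_update(ms, ma, mr, ms2)
        s = s2

print(round(Q[(3, 1)], 3))  # value of stepping right next to the goal -> 1.0
```

With k model-based updates per real step, the agent converges after far fewer environment interactions than plain Q-learning would need; the same multiplier is what amplifies bias when the model is inaccurate.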
Model-free deep reinforcement learning algorithms have been shown to be capable of learning a wide range of robotic skills, but typically require a very large number of samples to achieve good performance.
We describe Simulated Policy Learning (SimPLe), a complete model-based deep RL algorithm based on video prediction models, and present a comparison of several model architectures, including a novel architecture that yields the best results in our setting.
Finally, we assess the performance of the algorithm for learning motor controllers for a six-legged autonomous underwater vehicle.
MBRL-Lib is designed as a platform both for researchers, to easily develop, debug, and compare new algorithms, and for non-expert users, to lower the barrier to entry for deploying state-of-the-art algorithms.
High-dimensional observations are a major challenge in the application of model-based reinforcement learning (MBRL) to real-world environments.
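A common response to high-dimensional observations (a generic sketch, not the approach of any particular paper above) is to learn the dynamics model in a compact latent space rather than on raw observations. The toy example below uses an SVD/PCA projection as the encoder and a least-squares linear model as the latent dynamics; real systems replace both with learned neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden 2-D state follows simple linear dynamics.
A_true = np.array([[0.9, 0.1], [-0.1, 0.9]])
s = rng.normal(size=2)
states = [s]
for _ in range(199):
    s = A_true @ s
    states.append(s)
states = np.array(states)          # (200, 2)

# The agent only sees a 100-dimensional linear embedding of that state.
W = rng.normal(size=(100, 2))
obs = states @ W.T                 # (200, 100) "high-dimensional" observations

# Encoder: project onto the top-2 principal directions (SVD) of the observations.
_, _, Vt = np.linalg.svd(obs, full_matrices=False)
z = obs @ Vt[:2].T                 # (200, 2) latent codes

# Latent dynamics model: least-squares fit of z[t+1] ~ z[t] @ A_hat.
A_hat, *_ = np.linalg.lstsq(z[:-1], z[1:], rcond=None)
err = np.abs(z[:-1] @ A_hat - z[1:]).max()
print(err < 1e-8)                  # one-step latent predictions are near-exact
```

Because the 100-dimensional observations here are an exact linear function of a 2-D state, the 2-D latent recovers the dynamics almost perfectly; the point is that planning and model fitting happen in 2 dimensions, not 100.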