Dreaming: Model-based Reinforcement Learning by Latent Imagination without Reconstruction

In this paper, we propose a decoder-free extension of Dreamer, a leading model-based reinforcement learning (MBRL) method that learns from pixels. Dreamer is a sample- and cost-efficient solution for robot learning: it trains a latent state-space model based on a variational autoencoder and optimizes the policy by imagining trajectories in latent space.
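To make the idea of policy optimization by latent imagination concrete, here is a minimal toy sketch: a policy is evaluated by rolling out trajectories entirely in latent space with a learned dynamics and reward model, never decoding back to pixels. All names, shapes, and the linear models are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative toy sketch of latent imagination (hypothetical, not the
# authors' code): dynamics, reward, and policy all operate on latent states.

rng = np.random.default_rng(0)
LATENT_DIM, ACTION_DIM, HORIZON = 8, 2, 15

# Linear stand-ins for the learned latent dynamics and reward models.
A = rng.normal(scale=0.1, size=(LATENT_DIM, LATENT_DIM))
B = rng.normal(scale=0.1, size=(LATENT_DIM, ACTION_DIM))
w = rng.normal(size=LATENT_DIM)

def dynamics(z, a):
    # Predict the next latent state; no pixel decoder is involved.
    return np.tanh(A @ z + B @ a)

def reward(z):
    # Reward is predicted directly from the latent state.
    return float(w @ z)

def policy(z, theta):
    # Simple linear policy acting on the latent state.
    return np.tanh(theta @ z)

def imagined_return(z0, theta):
    # Roll out an imagined trajectory purely in latent space and sum rewards.
    z, ret = z0, 0.0
    for _ in range(HORIZON):
        a = policy(z, theta)
        z = dynamics(z, a)
        ret += reward(z)
    return ret

z0 = rng.normal(size=LATENT_DIM)
theta = np.zeros((ACTION_DIM, LATENT_DIM))
print(imagined_return(z0, theta))
```

Because the imagined return is a differentiable function of the policy parameters, a gradient-based learner can improve the policy without ever rendering an observation, which is the key efficiency argument behind this family of methods.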
