MIME: Mutual Information Minimisation Exploration

We show that reinforcement learning agents that learn by surprise (surprisal) get stuck at abrupt environmental transition boundaries because these transitions are difficult to learn. We propose a counter-intuitive solution that we call Mutual Information Minimising Exploration (MIME), in which an agent learns a latent representation of the environment without trying to predict the future states...
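
To make the failure mode concrete, below is a minimal sketch of the surprisal-style intrinsic reward that the abstract contrasts against: a learned forward dynamics model whose prediction error (the negative log-likelihood under a unit-variance Gaussian, up to constants) is paid out as intrinsic reward. The module names, network sizes, and the Gaussian error form are illustrative assumptions, not the paper's implementation; the sketch only shows why reward stays high wherever the dynamics remain hard to predict, such as abrupt transition boundaries.

```python
import torch
import torch.nn as nn


class ForwardModel(nn.Module):
    """Predicts the next state from the current state and action (illustrative)."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))


def surprisal_reward(model: ForwardModel,
                     state: torch.Tensor,
                     action: torch.Tensor,
                     next_state: torch.Tensor) -> torch.Tensor:
    """Intrinsic reward = forward-model prediction error (surprisal up to constants).

    Because the reward tracks model error, regions the model cannot fit --
    e.g. abrupt transition boundaries -- keep paying out reward, so a
    surprisal-driven agent keeps revisiting them instead of moving on.
    """
    with torch.no_grad():
        pred = model(state, action)
    return 0.5 * ((pred - next_state) ** 2).sum(dim=-1)
```

MIME's proposal, as described above, is to avoid this trap by learning a latent representation without a future-state prediction objective at all; the truncated abstract does not spell out the exact mutual-information objective, so no sketch of it is attempted here.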
