Reinforcement learning with world model

30 Aug 2019 · Jingbin Liu, Xinyang Gu, Shuai Liu

Nowadays, model-free reinforcement learning algorithms have achieved remarkable performance on many decision-making and control tasks, but high sample complexity and low sample efficiency still hinder their wide use. In this paper, we argue that if we intend to design an intelligent agent that learns fast and transfers well, the agent must be able to reflect key elements of intelligence, such as intuition, memory, prediction, and curiosity. We propose an agent framework that integrates off-policy reinforcement learning with world model learning, so as to embody these important features of intelligence in our algorithm design. We adopt the state-of-the-art model-free reinforcement learning algorithm Soft Actor-Critic as the agent's intuition, and use world model learning through an RNN to endow the agent with memory, curiosity, and the ability to predict. We show that these ideas work collaboratively with each other, and that our agent (RMC) achieves new state-of-the-art results while maintaining sample efficiency and training stability. Moreover, our agent framework can easily be extended from MDP to POMDP problems without performance loss.
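
The framework pairs a Soft Actor-Critic policy with an RNN world model whose hidden state acts as memory and whose prediction error can serve as a curiosity bonus. Below is a minimal PyTorch sketch of that idea, not the authors' implementation; the class `RNNWorldModel`, the function `curiosity_bonus`, and all dimensions are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): an RNN world model that predicts
# the next observation from the current observation and action. Its hidden
# state provides memory, and its prediction error can be used as an intrinsic
# curiosity reward added to the environment reward.
import torch
import torch.nn as nn


class RNNWorldModel(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden_dim=128):
        super().__init__()
        self.rnn = nn.GRUCell(obs_dim + act_dim, hidden_dim)
        self.predict_next_obs = nn.Linear(hidden_dim, obs_dim)

    def forward(self, obs, act, hidden):
        # Update the memory with the latest (observation, action) pair.
        hidden = self.rnn(torch.cat([obs, act], dim=-1), hidden)
        # Predict the next observation from the updated memory.
        return self.predict_next_obs(hidden), hidden


def curiosity_bonus(model, obs, act, next_obs, hidden):
    """World-model prediction error, usable as an intrinsic reward."""
    pred, hidden = model(obs, act, hidden)
    bonus = ((pred - next_obs) ** 2).mean(dim=-1)
    return bonus, hidden


# Example usage with dummy data (batch of 4, obs_dim=8, act_dim=2).
obs_dim, act_dim, hidden_dim = 8, 2, 128
model = RNNWorldModel(obs_dim, act_dim, hidden_dim)
obs = torch.randn(4, obs_dim)
act = torch.randn(4, act_dim)
next_obs = torch.randn(4, obs_dim)
hidden = torch.zeros(4, hidden_dim)
bonus, hidden = curiosity_bonus(model, obs, act, next_obs, hidden)
print(bonus.shape)  # torch.Size([4])
```

In a full agent along these lines, the world model's hidden state would presumably also be fed to the SAC actor and critics so the policy can exploit memory, and the curiosity bonus would be combined with the environment reward when computing learning targets.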
