We study the problem of representation learning in goal-conditioned hierarchical reinforcement learning.
Integrating model-free and model-based approaches in reinforcement learning has the potential to achieve the high asymptotic performance of model-free algorithms together with the low sample complexity of model-based methods.
When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
Combining parameter noise with traditional RL methods yields the best of both worlds.
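A minimal sketch of the idea behind parameter-space noise, using a hypothetical linear policy: rather than adding noise to each action independently, the policy's weights are perturbed once and then used deterministically, giving temporally consistent exploration. All names and the noise scale here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def policy(params, obs):
    # A toy linear policy: action = W @ obs (illustrative only).
    return params @ obs

base_params = rng.standard_normal((2, 4))  # 2 actions, 4-dim observation
sigma = 0.1                                # assumed noise scale

obs = rng.standard_normal(4)

# Action-space noise: an independent perturbation is added at every step.
noisy_action = policy(base_params, obs) + sigma * rng.standard_normal(2)

# Parameter-space noise: perturb the weights once (e.g. per episode),
# then act deterministically with the perturbed policy.
perturbed = base_params + sigma * rng.standard_normal(base_params.shape)
consistent_action = policy(perturbed, obs)
```

Because the perturbed weights are fixed for the whole episode, nearby states map to correlated exploratory actions, which plain action noise does not provide.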
Imitation Learning (IL) methods seek to match the behavior of an agent with that of an expert.
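One common IL method, behavioral cloning, reduces the matching problem to supervised learning on expert (state, action) pairs. The sketch below uses synthetic data and a linear least-squares fit purely for illustration; it is not any specific paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic expert demonstrations: 100 states of dimension 4,
# with actions produced by a hidden linear "expert" policy.
states = rng.standard_normal((100, 4))
expert_W = rng.standard_normal((4, 2))
actions = states @ expert_W

# Behavioral cloning as supervised regression:
# solve min_W ||states @ W - actions||^2.
W_hat, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The cloned policy matches the expert on the demonstration distribution.
max_err = np.abs(states @ W_hat - actions).max()
```

The well-known caveat is covariate shift: the cloned policy is only trained on states the expert visits, so errors can compound once the agent drifts off that distribution.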
In reinforcement learning (RL) research, it is common to assume access to direct online interactions with the environment.
Extracting and predicting object structure and dynamics from videos without supervision is a major challenge in machine learning.
In this paper, we aim to develop a simple and scalable reinforcement learning algorithm that uses standard supervised learning methods as subroutines.
Ranked #1 on HalfCheetah-v2 (OpenAI Gym).
A platform for Applied Reinforcement Learning (Applied RL)
In this work, we propose to apply trust region optimization to deep reinforcement learning using a recently proposed Kronecker-factored approximation to the curvature.