Continuous Control

377 papers with code • 73 benchmarks • 9 datasets

Continuous control refers to reinforcement learning problems in which the agent selects actions from a continuous space (for example, joint torques or velocities for simulated robots), as in the MuJoCo locomotion tasks and the DeepMind Control Suite commonly used as benchmarks.

Most implemented papers

Proximal Policy Optimization Algorithms

labmlai/annotated_deep_learning_paper_implementations 20 Jul 2017

We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent.
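
The core of the proposed method (PPO) is a clipped surrogate objective that bounds how far a single update can move the policy. Below is a minimal NumPy sketch of that objective; the clipping coefficient and the toy inputs are illustrative assumptions, not the paper's reference settings.

```python
# Minimal sketch of PPO's clipped surrogate objective (to be maximized).
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped surrogate loss from the PPO paper."""
    ratio = np.exp(logp_new - logp_old)          # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return np.mean(np.minimum(unclipped, clipped))

# Toy usage with random log-probabilities and advantages.
rng = np.random.default_rng(0)
logp_old = rng.normal(size=64)
logp_new = logp_old + 0.05 * rng.normal(size=64)
adv = rng.normal(size=64)
print(ppo_clip_objective(logp_new, logp_old, adv))
```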

Continuous control with deep reinforcement learning

ray-project/ray 9 Sep 2015

We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain.
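
The resulting algorithm (DDPG) pairs a deterministic actor with a Q-function critic, a replay buffer, and slowly updated target networks. The PyTorch sketch below shows one update step under those ideas; the network sizes, learning rates, and the random mini-batch standing in for replay samples are assumptions.

```python
# One DDPG-style update step (toy shapes and hyperparameters assumed).
import copy
import torch
import torch.nn as nn

obs_dim, act_dim, gamma, tau = 8, 2, 0.99, 0.005

actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                      nn.Linear(64, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_targ, critic_targ = copy.deepcopy(actor), copy.deepcopy(critic)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

# Random tensors standing in for a replay-buffer mini-batch.
obs = torch.randn(32, obs_dim)
act = torch.rand(32, act_dim) * 2 - 1
rew = torch.randn(32, 1)
next_obs = torch.randn(32, obs_dim)
done = torch.zeros(32, 1)

# Critic: regress Q(s, a) toward the bootstrapped target from the target nets.
with torch.no_grad():
    next_q = critic_targ(torch.cat([next_obs, actor_targ(next_obs)], dim=1))
    target = rew + gamma * (1 - done) * next_q
critic_loss = nn.functional.mse_loss(critic(torch.cat([obs, act], dim=1)), target)
critic_opt.zero_grad()
critic_loss.backward()
critic_opt.step()

# Actor: deterministic policy gradient, i.e. ascend Q(s, actor(s)).
actor_loss = -critic(torch.cat([obs, actor(obs)], dim=1)).mean()
actor_opt.zero_grad()
actor_loss.backward()
actor_opt.step()

# Polyak-average the target networks toward the online networks.
with torch.no_grad():
    for p, p_targ in zip(list(actor.parameters()) + list(critic.parameters()),
                         list(actor_targ.parameters()) + list(critic_targ.parameters())):
        p_targ.mul_(1 - tau).add_(tau * p)
```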

Addressing Function Approximation Error in Actor-Critic Methods

sfujim/TD3 ICML 2018

In value-based reinforcement learning methods such as deep Q-learning, function approximation errors are known to lead to overestimated value estimates and suboptimal policies.
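
The proposed remedy (TD3) bootstraps from the minimum of two target critics and smooths the target policy with clipped noise. Below is a sketch of just that target computation; network shapes, noise scales, and the random stand-in batch are illustrative assumptions.

```python
# Sketch of TD3's clipped double-Q target with target-policy smoothing.
import torch
import torch.nn as nn

obs_dim, act_dim, gamma = 8, 2, 0.99
policy_noise, noise_clip = 0.2, 0.5

# Target networks (freshly initialized here; in practice they are
# delayed copies of the online networks).
actor_targ = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                           nn.Linear(64, act_dim), nn.Tanh())
q1_targ = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
q2_targ = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))

# Random tensors standing in for a replay-buffer mini-batch.
rew = torch.randn(32, 1)
next_obs = torch.randn(32, obs_dim)
done = torch.zeros(32, 1)

with torch.no_grad():
    # Target-policy smoothing: perturb the target action with clipped noise.
    noise = (torch.randn(32, act_dim) * policy_noise).clamp(-noise_clip, noise_clip)
    next_act = (actor_targ(next_obs) + noise).clamp(-1.0, 1.0)
    # Clipped double Q-learning: bootstrap from the minimum of two target
    # critics to counteract overestimation bias.
    sa = torch.cat([next_obs, next_act], dim=1)
    target = rew + gamma * (1 - done) * torch.min(q1_targ(sa), q2_targ(sa))

# Both online critics are then regressed toward `target`, and the actor is
# updated less frequently than the critics (delayed policy updates).
```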

Simple random search provides a competitive approach to reinforcement learning

modestyachts/ARS 19 Mar 2018

A common belief in model-free reinforcement learning is that methods based on random search in the parameter space of policies exhibit significantly worse sample complexity than those that explore the space of actions.
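
The method searches directly in the parameter space of a linear policy by evaluating symmetric random perturbations and stepping along the return differences. The sketch below illustrates that loop against a toy stand-in for episode returns; the step size, noise scale, number of directions, and the surrogate return function are assumptions.

```python
# Random-search sketch over a linear policy's parameters (ARS-style update).
import numpy as np

rng = np.random.default_rng(0)
obs_dim, act_dim = 4, 2
theta = np.zeros((act_dim, obs_dim))             # linear policy: a = theta @ obs
theta_star = rng.normal(size=(act_dim, obs_dim))

def episode_return(theta):
    # Stand-in for rolling out the linear policy in a real environment:
    # a smooth function of the parameters with a known optimum.
    return -np.sum((theta - theta_star) ** 2)

step_size, noise_std, n_dirs = 0.1, 0.05, 8
for it in range(200):
    deltas = rng.normal(size=(n_dirs,) + theta.shape)
    r_plus = np.array([episode_return(theta + noise_std * d) for d in deltas])
    r_minus = np.array([episode_return(theta - noise_std * d) for d in deltas])
    # Step along the random directions weighted by return differences,
    # scaled by the standard deviation of the collected returns.
    sigma_r = np.concatenate([r_plus, r_minus]).std() + 1e-8
    grad = np.einsum('k,kij->ij', r_plus - r_minus, deltas) / n_dirs
    theta += step_size / sigma_r * grad

print(episode_return(theta))   # close to 0 once theta is near theta_star
```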

Dream to Control: Learning Behaviors by Latent Imagination

danijar/dreamer ICLR 2020

Learned world models summarize an agent's experience to facilitate learning complex behaviors.
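
The agent (Dreamer) learns behaviors by rolling the policy forward inside the learned latent model and backpropagating imagined returns through the rollout. The heavily simplified sketch below shows only that imagination step, with randomly initialized stand-ins for the world model and plain discounted reward sums; it omits the model learning and value estimation the paper relies on, and all module shapes are assumptions.

```python
# Simplified sketch of behavior learning by latent imagination.
import torch
import torch.nn as nn

latent_dim, act_dim, horizon, gamma = 16, 2, 10, 0.99

# Stand-ins for a learned latent dynamics and reward model.
dynamics = nn.Sequential(nn.Linear(latent_dim + act_dim, 64), nn.ELU(),
                         nn.Linear(64, latent_dim))      # s_{t+1} = f(s_t, a_t)
reward_model = nn.Sequential(nn.Linear(latent_dim, 64), nn.ELU(), nn.Linear(64, 1))
actor = nn.Sequential(nn.Linear(latent_dim, 64), nn.ELU(),
                      nn.Linear(64, act_dim), nn.Tanh())
actor_opt = torch.optim.Adam(actor.parameters(), lr=3e-4)

# Latents that would be inferred from replayed real experience.
state = torch.randn(32, latent_dim)
imagined_return = torch.zeros(32)

# Imagine a short rollout entirely in latent space.
for t in range(horizon):
    action = actor(state)
    state = dynamics(torch.cat([state, action], dim=-1))
    imagined_return = imagined_return + (gamma ** t) * reward_model(state).squeeze(-1)

# Ascend the imagined return through the differentiable rollout; the world
# model itself would be trained separately on real experience.
actor_loss = -imagined_return.mean()
actor_opt.zero_grad()
actor_loss.backward()
actor_opt.step()
```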

High-Dimensional Continuous Control Using Generalized Advantage Estimation

labmlai/annotated_deep_learning_paper_implementations 8 Jun 2015

Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks.
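
The paper's estimator, generalized advantage estimation (GAE), reduces the variance of policy gradients by exponentially weighting TD residuals with parameters gamma and lambda. A minimal NumPy sketch of that computation follows; the toy inputs and the particular gamma/lambda values are assumptions.

```python
# Minimal sketch of generalized advantage estimation for one trajectory.
import numpy as np

def gae_advantages(rewards, values, last_value, gamma=0.99, lam=0.95):
    """Compute GAE(gamma, lambda) advantages.
    `values` are V(s_0..s_{T-1}); `last_value` is V(s_T) (0 if terminal)."""
    T = len(rewards)
    values = np.append(values, last_value)
    adv = np.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]   # TD residual
        gae = delta + gamma * lam * gae
        adv[t] = gae
    return adv

# Toy usage with random rewards and value estimates.
rng = np.random.default_rng(0)
print(gae_advantages(rng.normal(size=5), rng.normal(size=5), last_value=0.0))
```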

Benchmarking Deep Reinforcement Learning for Continuous Control

rllab/rllab 22 Apr 2016

Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning.

Conservative Q-Learning for Offline Reinforcement Learning

aviralkumar2907/CQL NeurIPS 2020

We theoretically show that CQL produces a lower bound on the value of the current policy and that it can be incorporated into a policy learning procedure with theoretical improvement guarantees.
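
CQL adds a regularizer to the usual Bellman error that pushes down a log-sum-exp of Q-values over sampled actions while pushing up Q on the dataset's own actions, which yields the lower-bound property. The sketch below is a simplified version of that loss; the network sizes, the penalty weight, and uniform action sampling (in place of the paper's mixture of policy and uniform samples) are assumptions.

```python
# Simplified sketch of a CQL-style regularizer on top of a TD loss.
import copy
import torch
import torch.nn as nn

obs_dim, act_dim, gamma, alpha, n_samples = 8, 2, 0.99, 5.0, 10

q_net = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
q_targ = copy.deepcopy(q_net)
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                       nn.Linear(64, act_dim), nn.Tanh())

# Random tensors standing in for a batch from the fixed offline dataset.
B = 32
obs, act = torch.randn(B, obs_dim), torch.rand(B, act_dim) * 2 - 1
rew, next_obs, done = torch.randn(B, 1), torch.randn(B, obs_dim), torch.zeros(B, 1)

# Standard Bellman error toward the target network.
with torch.no_grad():
    next_q = q_targ(torch.cat([next_obs, policy(next_obs)], dim=1))
    target = rew + gamma * (1 - done) * next_q
q_data = q_net(torch.cat([obs, act], dim=1))
td_loss = nn.functional.mse_loss(q_data, target)

# Conservative penalty: push down a log-sum-exp of Q over sampled actions
# and push up Q on the dataset's actions.
rand_act = torch.rand(B, n_samples, act_dim) * 2 - 1
obs_rep = obs.unsqueeze(1).expand(-1, n_samples, -1)
q_rand = q_net(torch.cat([obs_rep, rand_act], dim=-1)).squeeze(-1)   # (B, n_samples)
cql_penalty = (torch.logsumexp(q_rand, dim=1) - q_data.squeeze(-1)).mean()

loss = td_loss + alpha * cql_penalty
loss.backward()
```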