D4RL
22 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Offline Reinforcement Learning with Implicit Q-Learning
The main insight in our work is that, instead of evaluating unseen actions from the latest policy, we can approximate the policy improvement step implicitly by treating the state value function as a random variable, with randomness determined by the action (while still integrating over the dynamics to avoid excessive optimism). Taking a state-conditional upper expectile of this random variable then estimates the value of the best actions in that state.
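The expectile step amounts to a small asymmetric regression loss. Below is a minimal PyTorch sketch, assuming hypothetical `q_net` and `v_net` modules that map logged state-action batches and state batches to scalar values; it illustrates the idea and is not the paper's reference implementation.

```python
import torch

def expectile_loss(diff: torch.Tensor, tau: float = 0.7) -> torch.Tensor:
    """Asymmetric L2 loss |tau - 1(diff < 0)| * diff^2, averaged over the batch."""
    weight = torch.abs(tau - (diff < 0).float())
    return (weight * diff.pow(2)).mean()

def value_loss(v_net, q_net, states, actions, tau: float = 0.7) -> torch.Tensor:
    """Fit V(s) toward an upper expectile (tau > 0.5) of Q(s, a) over dataset actions,
    approximating the value of the best in-support action without ever querying
    actions outside the dataset."""
    with torch.no_grad():
        target_q = q_net(states, actions)   # Q(s, a) on logged state-action pairs
    v = v_net(states)                       # current state-value estimate V(s)
    return expectile_loss(target_q - v, tau)
```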
D4RL: Datasets for Deep Data-Driven Reinforcement Learning
In this work, we introduce benchmarks specifically designed for the offline setting, guided by key properties of datasets relevant to real-world applications of offline RL.
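The datasets are exposed through a small gym-based interface. A minimal loading sketch, where the task name is just one example of the registered D4RL environments:

```python
import gym
import d4rl  # importing d4rl registers the benchmark environments with gym

# Any registered D4RL task name works here; 'halfcheetah-medium-v2' is one example.
env = gym.make('halfcheetah-medium-v2')

# Raw logged data as NumPy arrays ('observations', 'actions', 'rewards', 'terminals', ...).
dataset = env.get_dataset()

# Convenience view with aligned next_observations, handy for Q-learning style methods.
transitions = d4rl.qlearning_dataset(env)
print(transitions['observations'].shape, transitions['actions'].shape)
```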
Implicit Behavioral Cloning
We find that across a wide range of robot policy learning scenarios, treating supervised policy learning with an implicit model generally performs better, on average, than commonly used explicit models.
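As a concrete contrast with an explicit regression policy, here is a minimal sketch of the energy-based (implicit) formulation in PyTorch, with actions assumed to lie in [-1, 1]; the class, the sampling-based inference, and the contrastive loss are illustrative rather than the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EnergyPolicy(nn.Module):
    """Implicit policy: a network scores (state, action) pairs, and acting means
    finding the action with the lowest energy for the current state."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.act_dim = act_dim
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def energy(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

    def act(self, obs: torch.Tensor, num_samples: int = 256) -> torch.Tensor:
        """Derivative-free inference: sample candidate actions in [-1, 1]
        and return the one with the lowest energy."""
        candidates = torch.rand(num_samples, self.act_dim) * 2 - 1
        energies = self.energy(obs.expand(num_samples, -1), candidates)
        return candidates[energies.argmin()]

def info_nce_loss(policy: EnergyPolicy, obs, expert_act, num_negatives: int = 64):
    """Contrastive training: the demonstrated action should receive lower energy
    than randomly sampled counter-example actions."""
    negatives = torch.rand(obs.shape[0], num_negatives, expert_act.shape[-1]) * 2 - 1
    all_acts = torch.cat([expert_act.unsqueeze(1), negatives], dim=1)      # (B, 1+N, A)
    obs_rep = obs.unsqueeze(1).expand(-1, all_acts.shape[1], -1)           # (B, 1+N, O)
    logits = -policy.energy(obs_rep, all_acts)                             # low energy -> high logit
    labels = torch.zeros(obs.shape[0], dtype=torch.long)                   # expert action sits at index 0
    return F.cross_entropy(logits, labels)
```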
Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble
However, prior methods typically require accurate estimation of the behavior policy or sampling from OOD data points, which can themselves be non-trivial problems.
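The ensemble-based alternative can be summarized in the Bellman target itself. A minimal sketch, assuming a list `q_ensemble` of hypothetical critic networks and next actions drawn from the current policy:

```python
import torch

def pessimistic_target(q_ensemble, rewards, next_obs, next_actions, dones,
                       gamma: float = 0.99) -> torch.Tensor:
    """Clipped-ensemble Bellman target: the minimum over N independent Q-estimates
    penalizes state-action pairs where the ensemble disagrees (high epistemic
    uncertainty), without estimating the behavior policy or explicitly sampling
    OOD data points."""
    with torch.no_grad():
        next_qs = torch.stack([q(next_obs, next_actions) for q in q_ensemble])  # (N, B)
        min_next_q = next_qs.min(dim=0).values                                  # (B,)
        return rewards + gamma * (1.0 - dones) * min_next_q
```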
d3rlpy: An Offline Deep Reinforcement Learning Library
In this paper, we introduce d3rlpy, an open-source offline deep reinforcement learning (RL) library for Python.
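A minimal usage sketch in the v1-style d3rlpy interface; dataset helpers, algorithm constructors, and fit arguments have changed between library versions, so treat the exact names and signatures here as assumptions and check the library's documentation.

```python
import numpy as np
import d3rlpy

# Load a D4RL task as an MDPDataset plus the matching gym environment
# ('hopper-medium-v0' is just an example task name).
dataset, env = d3rlpy.datasets.get_d4rl('hopper-medium-v0')

# Pick an offline algorithm and fit it on the logged dataset.
cql = d3rlpy.algos.CQL()
cql.fit(dataset, n_steps=100000)

# Query the trained policy for a greedy action on a fresh observation.
obs = env.reset()
action = cql.predict(np.array([obs]))[0]
```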
Online Decision Transformer
Recent work has shown that offline reinforcement learning (RL) can be formulated as a sequence modeling problem (Chen et al., 2021; Janner et al., 2021) and solved via approaches similar to large-scale language modeling.
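To make the sequence-modeling view concrete, here is a minimal sketch of how one trajectory is laid out as (return-to-go, state, action) tokens; embeddings, timestep encodings, and the transformer itself are omitted, and the helper names are illustrative.

```python
import numpy as np

def returns_to_go(rewards: np.ndarray) -> np.ndarray:
    """Return-to-go at step t: the sum of rewards from t to the end of the episode."""
    return np.cumsum(rewards[::-1])[::-1].copy()

def interleave_trajectory(states, actions, rewards):
    """Lay out one trajectory as the (return-to-go, state, action) token stream that
    sequence-modeling approaches condition on, so a transformer trained with a
    language-modeling-style objective can predict the next action."""
    rtg = returns_to_go(np.asarray(rewards, dtype=np.float64))
    tokens = []
    for t in range(len(actions)):
        tokens.append(('return_to_go', rtg[t]))
        tokens.append(('state', states[t]))
        tokens.append(('action', actions[t]))
    return tokens
```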
CORL: Research-oriented Deep Offline Reinforcement Learning Library
CORL is an open-source library that provides single-file implementations of Deep Offline Reinforcement Learning algorithms.
Model-Based Offline Reinforcement Learning with Pessimism-Modulated Dynamics Belief
To make this practical, we further devise an offline RL algorithm to approximately find the solution.
Offline RL Without Off-Policy Evaluation
In addition, we hypothesize that the strong performance of the one-step algorithm is due to a combination of favorable structure in the environment and behavior policy.
Conservative Offline Distributional Reinforcement Learning
We prove that CODAC learns a conservative return distribution -- in particular, for finite MDPs, CODAC converges to a uniform lower bound on the quantiles of the return distribution; our proof relies on a novel analysis of the distributional Bellman operator.
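The conservatism can be pictured as a CQL-style penalty applied on top of ordinary quantile regression. A minimal PyTorch sketch, assuming hypothetical quantile estimates for dataset actions and policy-sampled (potentially OOD) actions; this illustrates the idea and is not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def quantile_huber_loss(pred_quantiles, target_samples, taus, kappa: float = 1.0):
    """Standard quantile-regression Huber loss from distributional RL.

    pred_quantiles : (B, N) predicted quantile values of the return distribution
    target_samples : (B, M) samples from the Bellman target distribution
    taus           : (N,)   quantile fractions in (0, 1)
    """
    td = target_samples.unsqueeze(1) - pred_quantiles.unsqueeze(2)           # (B, N, M)
    huber = F.huber_loss(pred_quantiles.unsqueeze(2).expand_as(td),
                         target_samples.unsqueeze(1).expand_as(td),
                         delta=kappa, reduction='none')
    weight = torch.abs(taus.view(1, -1, 1) - (td < 0).float())
    return (weight * huber / kappa).mean()

def conservative_quantile_penalty(quantiles_ood, quantiles_data):
    """CQL-style regularizer on quantile estimates: push down quantiles of
    policy-sampled (potentially OOD) actions and push up those of dataset actions,
    which drives the learned return distribution toward a lower bound."""
    return quantiles_ood.mean() - quantiles_data.mean()
```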