Search Results for author: Sergey Levine

Found 362 papers, 172 papers with code

Global Decision-Making via Local Economic Transactions

no code implementations ICML 2020 Michael Chang, Sid Kaushik, S. Matthew Weinberg, Sergey Levine, Thomas Griffiths

This paper seeks to establish a mechanism for directing a collection of simple, specialized, self-interested agents to solve what are traditionally posed as monolithic single-agent sequential decision problems with a central global objective.

Decision Making

RvS: What is Essential for Offline RL via Supervised Learning?

1 code implementation 20 Dec 2021 Scott Emmons, Benjamin Eysenbach, Ilya Kostrikov, Sergey Levine

Recent work has shown that supervised learning alone, without temporal difference (TD) learning, can be remarkably effective for offline RL.

Offline RL
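
Roughly, the supervised approach studied here amounts to conditional behavior cloning: regress onto dataset actions given the state plus an outcome variable (a goal or a return), with no TD learning anywhere. A minimal, hedged sketch under assumed dataset fields and network shapes; this is not the paper's reference implementation:

```python
# Hypothetical sketch of offline RL via supervised learning:
# condition a policy on an outcome (goal or return) and do plain regression.
import torch
import torch.nn as nn

class ConditionalPolicy(nn.Module):
    def __init__(self, state_dim, cond_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, cond):
        # cond: e.g. a goal state or a return-to-go (both assumptions here).
        return self.net(torch.cat([state, cond], dim=-1))

def bc_loss(policy, states, conds, actions):
    # Supervised regression onto dataset actions; no TD learning anywhere.
    return ((policy(states, conds) - actions) ** 2).mean()
```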

Autonomous Reinforcement Learning: Formalism and Benchmarking

no code implementations 17 Dec 2021 Archit Sharma, Kelvin Xu, Nikhil Sardana, Abhishek Gupta, Karol Hausman, Sergey Levine, Chelsea Finn

In this paper, we aim to address this discrepancy by laying out a framework for Autonomous Reinforcement Learning (ARL): reinforcement learning where the agent not only learns through its own experience, but also contends with lack of human supervision to reset between trials.

DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization

no code implementations 9 Dec 2021 Aviral Kumar, Rishabh Agarwal, Tengyu Ma, Aaron Courville, George Tucker, Sergey Levine

In this paper, we discuss how the implicit regularization effect of SGD seen in supervised learning could in fact be harmful in the offline deep RL setting, leading to poor generalization and degenerate feature representations.

Atari Games Offline RL

Extending the WILDS Benchmark for Unsupervised Adaptation

no code implementations 9 Dec 2021 Shiori Sagawa, Pang Wei Koh, Tony Lee, Irena Gao, Sang Michael Xie, Kendrick Shen, Ananya Kumar, Weihua Hu, Michihiro Yasunaga, Henrik Marklund, Sara Beery, Etienne David, Ian Stavness, Wei Guo, Jure Leskovec, Kate Saenko, Tatsunori Hashimoto, Sergey Levine, Chelsea Finn, Percy Liang

In this work, we present the WILDS 2.0 update, which extends 8 of the 10 datasets in the WILDS benchmark of distribution shifts to include curated unlabeled data that would be realistically obtainable in deployment.

CoMPS: Continual Meta Policy Search

no code implementations 8 Dec 2021 Glen Berseth, Zhiwei Zhang, Grace Zhang, Chelsea Finn, Sergey Levine

Beyond simply transferring past experience to new tasks, our goal is to devise continual reinforcement learning algorithms that learn to learn, using their experience on previous tasks to learn new tasks more quickly.

Continual Learning Continuous Control +3

Information is Power: Intrinsic Control via Information Capture

no code implementations NeurIPS 2021 Nicholas Rhinehart, Jenny Wang, Glen Berseth, John D. Co-Reyes, Danijar Hafner, Chelsea Finn, Sergey Levine

We study this question in dynamic partially-observed environments, and argue that a compact and general learning objective is to minimize the entropy of the agent's state visitation estimated using a latent state-space model.

Bayesian Adaptation for Covariate Shift

no code implementations NeurIPS 2021 Aurick Zhou, Sergey Levine

When faced with distribution shift at test time, deep neural networks often make inaccurate predictions with unreliable uncertainty estimates. While improving the robustness of neural networks is one promising approach to mitigate this issue, an appealing alternative to robustifying networks against all possible test-time shifts is to instead directly adapt them to unlabeled inputs from the particular distribution shift we encounter at test time. However, this poses a challenging question: in the standard Bayesian model for supervised learning, unlabeled inputs are conditionally independent of model parameters when the labels are unobserved, so what can unlabeled data tell us about the model parameters at test-time?

Domain Adaptation Image Classification

TRAIL: Near-Optimal Imitation Learning with Suboptimal Data

1 code implementation 27 Oct 2021 Mengjiao Yang, Sergey Levine, Ofir Nachum

In this work, we answer this question affirmatively and present training objectives that use offline datasets to learn a factored transition model whose structure enables the extraction of a latent action space.

Imitation Learning

Understanding the World Through Action

1 code implementation 24 Oct 2021 Sergey Levine

The recent history of machine learning research has taught us that machine learning methods can be most effective when they are provided with very large, high-capacity models, and trained on very large and diverse datasets.

C-Planning: An Automatic Curriculum for Learning Goal-Reaching Tasks

1 code implementation 22 Oct 2021 Tianjun Zhang, Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine, Joseph E. Gonzalez

Goal-conditioned reinforcement learning (RL) can solve tasks in a wide range of domains, including navigation and manipulation, but learning to reach distant goals remains a central challenge to the field.

Data-Driven Offline Optimization For Architecting Hardware Accelerators

no code implementations 20 Oct 2021 Aviral Kumar, Amir Yazdanbakhsh, Milad Hashemi, Kevin Swersky, Sergey Levine

An alternative paradigm is to use a "data-driven", offline approach that utilizes logged simulation data to architect hardware accelerators, without needing any form of simulation.

MEMO: Test Time Robustness via Adaptation and Augmentation

no code implementations 18 Oct 2021 Marvin Zhang, Sergey Levine, Chelsea Finn

We study the problem of test-time robustification, i.e., using the test input to improve model robustness.
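
One plausible reading of the title's "adaptation and augmentation" is test-time entropy minimization over augmented copies of each test input. The sketch below is an assumption-laden illustration (the augmentation set, step size, and single-step recipe are all our choices), not the paper's method as specified:

```python
# Hedged sketch: adapt on a single unlabeled test input by minimizing the
# entropy of the model's average prediction over augmented views.
import torch
import torch.nn.functional as F

def adapt_and_predict(model, x, augment, n_aug=32, lr=1e-3):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    views = torch.stack([augment(x) for _ in range(n_aug)])  # (n_aug, C, H, W)
    probs = F.softmax(model(views), dim=-1).mean(dim=0)      # marginal prediction
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum()
    opt.zero_grad()
    entropy.backward()
    opt.step()                                # one adaptation step per input
    return model(x.unsqueeze(0)).argmax(dim=-1)
```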

Offline Reinforcement Learning with Implicit Q-Learning

5 code implementations 12 Oct 2021 Ilya Kostrikov, Ashvin Nair, Sergey Levine

The main insight in our work is that, instead of evaluating unseen actions from the latest policy, we can approximate the policy improvement step implicitly by treating the state value function as a random variable, with randomness determined by the action (while still integrating over the dynamics to avoid excessive optimism), and then taking a state conditional upper expectile of this random variable to estimate the value of the best actions in that state.

Offline RL Q-Learning
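
The "upper expectile" step the abstract describes corresponds to expectile regression: fit V(s) to Q(s, a) with an asymmetric squared loss that over-weights positive residuals. A minimal sketch, assuming tensors of Q- and V-estimates; not the authors' reference code:

```python
# Expectile regression toward the value of the best supported actions:
# tau > 0.5 over-weights cases where Q exceeds V, pulling V upward.
import torch

def expectile_loss(q_values, v_values, tau=0.9):
    diff = q_values - v_values
    weight = torch.abs(tau - (diff < 0).float())  # tau if diff >= 0, else 1 - tau
    return (weight * diff ** 2).mean()
```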

Mismatched No More: Joint Model-Policy Optimization for Model-Based RL

1 code implementation 6 Oct 2021 Benjamin Eysenbach, Alexander Khazatsky, Sergey Levine, Ruslan Salakhutdinov

As noted in prior work, there is an objective mismatch: models are useful if they yield good policies, but they are trained to maximize their accuracy, rather than the performance of the policies that result from them.

Model-based Reinforcement Learning

The Information Geometry of Unsupervised Reinforcement Learning

1 code implementation 6 Oct 2021 Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine

In this work, we show that unsupervised skill discovery algorithms based on mutual information maximization do not learn skills that are optimal for every possible reward function.

Contrastive Learning Representation Learning +1

Training on Test Data with Bayesian Adaptation for Covariate Shift

no code implementations 27 Sep 2021 Aurick Zhou, Sergey Levine

When faced with distribution shift at test time, deep neural networks often make inaccurate predictions with unreliable uncertainty estimates.

Domain Adaptation Image Classification

A Workflow for Offline Model-Free Robotic Reinforcement Learning

no code implementations 22 Sep 2021 Aviral Kumar, Anikait Singh, Stephen Tian, Chelsea Finn, Sergey Levine

To this end, we devise a set of metrics and conditions that can be tracked over the course of offline training, and can inform the practitioner about how the algorithm and model architecture should be adjusted to improve final performance.

Offline RL

Conservative Data Sharing for Multi-Task Offline Reinforcement Learning

no code implementations NeurIPS 2021 Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Sergey Levine, Chelsea Finn

We argue that a natural use case of offline RL is in settings where we can pool large amounts of data collected in various scenarios for solving different tasks, and utilize all of this data to learn behaviors for all the tasks more effectively rather than training each one in isolation.

Offline RL

Robust Predictable Control

1 code implementation NeurIPS 2021 Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine

Many of the challenges facing today's reinforcement learning (RL) algorithms, such as robustness, generalization, transfer, and computational efficiency, are closely related to compression.

Decision Making

Fully Autonomous Real-World Reinforcement Learning with Applications to Mobile Manipulation

no code implementations 28 Jul 2021 Charles Sun, Jędrzej Orbik, Coline Devin, Brian Yang, Abhishek Gupta, Glen Berseth, Sergey Levine

Our aim is to devise a robotic reinforcement learning system for learning navigation and manipulation together, in an autonomous way without human intervention, enabling continual learning under realistic assumptions.

Continual Learning

Autonomous Reinforcement Learning via Subgoal Curricula

no code implementations NeurIPS 2021 Archit Sharma, Abhishek Gupta, Sergey Levine, Karol Hausman, Chelsea Finn

Reinforcement learning (RL) promises to enable autonomous acquisition of complex behaviors for diverse agents.

MURAL: Meta-Learning Uncertainty-Aware Rewards for Outcome-Driven Reinforcement Learning

no code implementations 15 Jul 2021 Kevin Li, Abhishek Gupta, Ashwin Reddy, Vitchyr Pong, Aurick Zhou, Justin Yu, Sergey Levine

In this work, we show that an uncertainty-aware classifier can solve challenging reinforcement learning problems by both encouraging exploration and providing directed guidance towards positive outcomes.

Meta-Learning

Conservative Objective Models for Effective Offline Model-Based Optimization

1 code implementation 14 Jul 2021 Brandon Trabucco, Aviral Kumar, Xinyang Geng, Sergey Levine

Computational design problems arise in a number of settings, from synthetic biology to computer architectures.

Explore and Control with Adversarial Surprise

1 code implementation ICML Workshop URL 2021 Arnaud Fickinger, Natasha Jaques, Samyak Parajuli, Michael Chang, Nicholas Rhinehart, Glen Berseth, Stuart Russell, Sergey Levine

Unsupervised reinforcement learning (RL) studies how to leverage environment statistics to learn useful behaviors without the cost of reward engineering.

Unsupervised Reinforcement Learning

Offline Meta-Reinforcement Learning with Online Self-Supervision

1 code implementation 8 Jul 2021 Vitchyr H. Pong, Ashvin Nair, Laura Smith, Catherine Huang, Sergey Levine

Meta-reinforcement learning (RL) can meta-train policies that adapt to new tasks with orders of magnitude less data than standard RL, but meta-training itself is costly and time-consuming.

Meta Reinforcement Learning Offline RL

Pragmatic Image Compression for Human-in-the-Loop Decision-Making

1 code implementation NeurIPS 2021 Siddharth Reddy, Anca D. Dragan, Sergey Levine

Standard lossy image compression algorithms aim to preserve an image's appearance, while minimizing the number of bits needed to transmit it.

Car Racing Decision Making +1

Modularity in Reinforcement Learning via Algorithmic Independence in Credit Assignment

no code implementations ICLR Workshop Learning_to_Learn 2021 Michael Chang, Sidhant Kaushik, Sergey Levine, Thomas L. Griffiths

Empirical evidence suggests that such action-value methods are more sample efficient than policy-gradient methods on transfer problems that require only sparse changes to a sequence of previously optimal decisions.

Decision Making Policy Gradient Methods

Hierarchically Integrated Models: Learning to Navigate from Heterogeneous Robots

no code implementations 24 Jun 2021 Katie Kang, Gregory Kahn, Sergey Levine

In this work, we propose a deep reinforcement learning algorithm with hierarchically integrated models (HInt).

Model-Based Reinforcement Learning via Latent-Space Collocation

1 code implementation 24 Jun 2021 Oleh Rybkin, Chuning Zhu, Anusha Nagabandi, Kostas Daniilidis, Igor Mordatch, Sergey Levine

The resulting latent collocation method (LatCo) optimizes trajectories of latent states, which improves over previously proposed shooting methods for visual model-based RL on tasks with sparse rewards and long-term goals.

Model-based Reinforcement Learning

FitVid: Overfitting in Pixel-Level Video Prediction

1 code implementation 24 Jun 2021 Mohammad Babaeizadeh, Mohammad Taghi Saffar, Suraj Nair, Sergey Levine, Chelsea Finn, Dumitru Erhan

There is a growing body of evidence that underfitting on the training data is one of the primary causes of low-quality predictions.

Image Augmentation Video Prediction

Reinforcement Learning as One Big Sequence Modeling Problem

no code implementations ICML Workshop URL 2021 Michael Janner, Qiyang Li, Sergey Levine

However, we can also view RL as a sequence modeling problem, with the goal being to predict a sequence of actions that leads to a sequence of high rewards.

Imitation Learning Offline RL

Intrinsic Control of Variational Beliefs in Dynamic Partially-Observed Visual Environments

no code implementations ICML Workshop URL 2021 Nicholas Rhinehart, Jenny Wang, Glen Berseth, John D Co-Reyes, Danijar Hafner, Chelsea Finn, Sergey Levine

We study this question in dynamic partially-observed environments, and argue that a compact and general learning objective is to minimize the entropy of the agent's state visitation estimated using a latent state-space model.

Offline Reinforcement Learning as One Big Sequence Modeling Problem

1 code implementation NeurIPS 2021 Michael Janner, Qiyang Li, Sergey Levine

Reinforcement learning (RL) is typically concerned with estimating stationary policies or single-step models, leveraging the Markov property to factorize problems in time.

Imitation Learning Offline RL

Variational Empowerment as Representation Learning for Goal-Based Reinforcement Learning

no code implementations 2 Jun 2021 Jongwook Choi, Archit Sharma, Honglak Lee, Sergey Levine, Shixiang Shane Gu

Learning to reach goal states and learning diverse skills through mutual information (MI) maximization have been proposed as principled frameworks for self-supervised reinforcement learning, allowing agents to acquire broadly applicable multitask policies with minimal reward engineering.

Representation Learning

What Can I Do Here? Learning New Skills by Imagining Visual Affordances

1 code implementation 1 Jun 2021 Alexander Khazatsky, Ashvin Nair, Daniel Jing, Sergey Levine

In effect, prior data is used to learn what kinds of outcomes may be possible, such that when the robot encounters an unfamiliar setting, it can sample potential outcomes from its model, attempt to reach them, and thereby update both its skills and its outcome model.

DisCo RL: Distribution-Conditioned Reinforcement Learning for General-Purpose Policies

no code implementations 23 Apr 2021 Soroush Nasiriany, Vitchyr H. Pong, Ashvin Nair, Alexander Khazatsky, Glen Berseth, Sergey Levine

Contextual policies provide this capability in principle, but the representation of the context determines the degree of generalization and expressivity.

Contingencies from Observations: Tractable Contingency Planning with Learned Behavior Models

1 code implementation 21 Apr 2021 Nicholas Rhinehart, Jeff He, Charles Packer, Matthew A. Wright, Rowan McAllister, Joseph E. Gonzalez, Sergey Levine

Humans have a remarkable ability to make decisions by accurately reasoning about future events, including the future behaviors and states of mind of other agents.

Outcome-Driven Reinforcement Learning via Variational Inference

no code implementations NeurIPS 2021 Tim G. J. Rudner, Vitchyr H. Pong, Rowan McAllister, Yarin Gal, Sergey Levine

While reinforcement learning algorithms provide automated acquisition of optimal policies, practical application of such methods requires a number of design decisions, such as manually designing reward functions that not only define the task, but also provide sufficient shaping to accomplish it.

Variational Inference

MT-Opt: Continuous Multi-Task Robotic Reinforcement Learning at Scale

no code implementations 16 Apr 2021 Dmitry Kalashnikov, Jacob Varley, Yevgen Chebotar, Benjamin Swanson, Rico Jonschkowski, Chelsea Finn, Sergey Levine, Karol Hausman

In this paper, we study how a large-scale collective robotic learning system can acquire a repertoire of behaviors simultaneously, sharing exploration, experience, and representations across tasks.

Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills

no code implementations 15 Apr 2021 Yevgen Chebotar, Karol Hausman, Yao Lu, Ted Xiao, Dmitry Kalashnikov, Jake Varley, Alex Irpan, Benjamin Eysenbach, Ryan Julian, Chelsea Finn, Sergey Levine

We consider the problem of learning useful robotic skills from previously collected offline data without access to manually specified rewards or additional online exploration, a setting that is becoming increasingly important for scaling robot learning by reusing past robotic data.

Q-Learning

Rapid Exploration for Open-World Navigation with Latent Goal Models

no code implementations 12 Apr 2021 Dhruv Shah, Benjamin Eysenbach, Nicholas Rhinehart, Sergey Levine

We describe a robotic learning system for autonomous exploration and navigation in diverse, open-world environments.

Autonomous Navigation

AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control

no code implementations 5 Apr 2021 Xue Bin Peng, Ze Ma, Pieter Abbeel, Sergey Levine, Angjoo Kanazawa

Our system produces high-quality motions that are comparable to those achieved by state-of-the-art tracking-based techniques, while also being able to easily accommodate large datasets of unstructured motion clips.

Imitation Learning

Benchmarks for Deep Off-Policy Evaluation

3 code implementations ICLR 2021 Justin Fu, Mohammad Norouzi, Ofir Nachum, George Tucker, Ziyu Wang, Alexander Novikov, Mengjiao Yang, Michael R. Zhang, Yutian Chen, Aviral Kumar, Cosmin Paduraru, Sergey Levine, Tom Le Paine

Off-policy evaluation (OPE) holds the promise of being able to leverage large, offline datasets for both evaluating and selecting complex policies for decision making.

Continuous Control Decision Making +1

Accelerating Online Reinforcement Learning via Model-Based Meta-Learning

no code implementations ICLR Workshop Learning_to_Learn 2021 John D Co-Reyes, Sarah Feng, Glen Berseth, Jie Qui, Sergey Levine

Current reinforcement learning algorithms struggle to adapt quickly to new situations without large amounts of experience and, usually, large amounts of optimization over that experience.

Meta-Learning

Maximum Entropy RL (Provably) Solves Some Robust RL Problems

no code implementations 10 Mar 2021 Benjamin Eysenbach, Sergey Levine

Many potential applications of reinforcement learning (RL) require guarantees that the agent will perform well in the face of disturbances to the dynamics or reward function.

Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation

no code implementations ICLR 2021 Justin Fu, Sergey Levine

We propose to tackle this problem by leveraging the normalized maximum-likelihood (NML) estimator, which provides a principled approach to handling uncertainty and out-of-distribution inputs.

COMBO: Conservative Offline Model-Based Policy Optimization

2 code implementations NeurIPS 2021 Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, Chelsea Finn

We overcome this limitation by developing a new model-based offline RL algorithm, COMBO, that regularizes the value function on out-of-support state-action tuples generated via rollouts under the learned model.

Offline RL

How to Train Your Robot with Deep Reinforcement Learning; Lessons We've Learned

no code implementations 4 Feb 2021 Julian Ibarz, Jie Tan, Chelsea Finn, Mrinal Kalakrishnan, Peter Pastor, Sergey Levine

Learning to perceive and move in the real world presents numerous challenges, some of which are easier to address than others, and some of which are often not considered in RL research that focuses only on simulated domains.

Evolving Reinforcement Learning Algorithms

3 code implementations ICLR 2021 John D. Co-Reyes, Yingjie Miao, Daiyi Peng, Esteban Real, Sergey Levine, Quoc V. Le, Honglak Lee, Aleksandra Faust

Learning from scratch on simple classical control and gridworld tasks, our method rediscovers the temporal-difference (TD) algorithm.

Atari Games Meta-Learning
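
For reference, the textbook temporal-difference update the abstract refers to, in tabular form:

```python
# Classical tabular TD(0) value update (textbook form, shown for reference).
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.99, done=False):
    target = r + (0.0 if done else gamma * V[s_next])
    V[s] += alpha * (target - V[s])
```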

Factorizing Declarative and Procedural Knowledge in Structured, Dynamical Environments

no code implementations ICLR 2021 Anirudh Goyal, Alex Lamb, Phanideep Gampa, Philippe Beaudoin, Charles Blundell, Sergey Levine, Yoshua Bengio, Michael Curtis Mozer

To use a video game as an illustration, two enemies of the same type will share schemata but will have separate object files to encode their distinct state (e.g., health, position).

Reinforcement Learning with Bayesian Classifiers: Efficient Skill Learning from Outcome Examples

no code implementations 1 Jan 2021 Kevin Li, Abhishek Gupta, Vitchyr H. Pong, Ashwin Reddy, Aurick Zhou, Justin Yu, Sergey Levine

In this work, we study a more tractable class of reinforcement learning problems defined by data that provides examples of successful outcome states.

Design-Bench: Benchmarks for Data-Driven Offline Model-Based Optimization

no code implementations 1 Jan 2021 Brandon Trabucco, Aviral Kumar, Xinyang Geng, Sergey Levine

To address this problem, we present Design-Bench, a benchmark suite of offline MBO tasks with a unified evaluation protocol and reference implementations of recent methods.

On Trade-offs of Image Prediction in Visual Model-Based Reinforcement Learning

no code implementations 1 Jan 2021 Mohammad Babaeizadeh, Mohammad Taghi Saffar, Danijar Hafner, Dumitru Erhan, Harini Kannan, Chelsea Finn, Sergey Levine

In this paper, we study a number of design decisions for the predictive model in visual MBRL algorithms, focusing specifically on methods that use a predictive model for planning.

Model-based Reinforcement Learning

Invariant Representations for Reinforcement Learning without Reconstruction

no code implementations ICLR 2021 Amy Zhang, Rowan Thomas McAllister, Roberto Calandra, Yarin Gal, Sergey Levine

We study how representation learning can accelerate reinforcement learning from rich observations, such as images, without relying either on domain knowledge or pixel-reconstruction.

Causal Inference Representation Learning

Variable-Shot Adaptation for Incremental Meta-Learning

no code implementations 1 Jan 2021 Tianhe Yu, Xinyang Geng, Chelsea Finn, Sergey Levine

Few-shot meta-learning methods consider the problem of learning new tasks from a small, fixed number of examples, by meta-learning across static data from a set of previous tasks.

Meta-Learning Zero-Shot Learning

Model-Based Visual Planning with Self-Supervised Functional Distances

1 code implementation ICLR 2021 Stephen Tian, Suraj Nair, Frederik Ebert, Sudeep Dasari, Benjamin Eysenbach, Chelsea Finn, Sergey Levine

In our experiments, we find that our method can successfully learn models that perform a variety of tasks at test-time, moving objects amid distractors with a simulated robotic arm and even learning to open and close a drawer using a real-world robot.

ViNG: Learning Open-World Navigation with Visual Goals

no code implementations 17 Dec 2020 Dhruv Shah, Benjamin Eysenbach, Gregory Kahn, Nicholas Rhinehart, Sergey Levine

We propose a learning-based navigation system for reaching visually indicated goals and demonstrate this system on a real mobile robot platform.

Variable-Shot Adaptation for Online Meta-Learning

no code implementations 14 Dec 2020 Tianhe Yu, Xinyang Geng, Chelsea Finn, Sergey Levine

Few-shot meta-learning methods consider the problem of learning new tasks from a small, fixed number of examples, by meta-learning across static data from a set of previous tasks.

Meta-Learning Zero-Shot Learning

Models, Pixels, and Rewards: Evaluating Design Trade-offs in Visual Model-Based Reinforcement Learning

1 code implementation 8 Dec 2020 Mohammad Babaeizadeh, Mohammad Taghi Saffar, Danijar Hafner, Harini Kannan, Chelsea Finn, Sergey Levine, Dumitru Erhan

In this paper, we study a number of design decisions for the predictive model in visual MBRL algorithms, focusing specifically on methods that use a predictive model for planning.

Model-based Reinforcement Learning

Gamma-Models: Generative Temporal Difference Learning for Infinite-Horizon Prediction

1 code implementation NeurIPS 2020 Michael Janner, Igor Mordatch, Sergey Levine

We introduce the gamma-model, a predictive model of environment dynamics with an infinite, probabilistic horizon.

Continual Learning of Control Primitives : Skill Discovery via Reset-Games

no code implementations NeurIPS 2020 Kelvin Xu, Siddharth Verma, Chelsea Finn, Sergey Levine

First, in real-world settings, when an agent attempts a task and fails, the environment must somehow "reset" so that the agent can attempt the task again.

Continual Learning

Parrot: Data-Driven Behavioral Priors for Reinforcement Learning

no code implementations ICLR 2021 Avi Singh, Huihan Liu, Gaoyue Zhou, Albert Yu, Nicholas Rhinehart, Sergey Levine

Reinforcement learning provides a general framework for flexible decision making and control, but requires extensive data collection for each new task that an agent needs to learn.

Decision Making

C-Learning: Learning to Achieve Goals via Recursive Classification

no code implementations ICLR 2021 Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine

This problem, which can be viewed as a reframing of goal-conditioned reinforcement learning (RL), is centered around learning a conditional probability density function over future states.

Density Estimation General Classification +1
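
Learning a density over future states can be reduced to binary classification: distinguish states actually visited in the (discounted) future of (s, a) from states drawn at random, and recover the density from the classifier's odds ratio. A hedged sketch with assumed batch construction and network details:

```python
# Sketch: classify future states vs. random states; the classifier's
# odds ratio implicitly estimates the discounted future-state density.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FutureStateClassifier(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action, future_state):
        return self.net(torch.cat([state, action, future_state], dim=-1))

def classifier_loss(clf, s, a, s_future, s_random):
    pos = clf(s, a, s_future)   # label 1: sampled from the future of (s, a)
    neg = clf(s, a, s_random)   # label 0: sampled from the marginal state distribution
    return (F.binary_cross_entropy_with_logits(pos, torch.ones_like(pos)) +
            F.binary_cross_entropy_with_logits(neg, torch.zeros_like(neg)))
```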

Reinforcement Learning with Videos: Combining Offline Observations with Interaction

1 code implementation 12 Nov 2020 Karl Schmeckpeper, Oleh Rybkin, Kostas Daniilidis, Sergey Levine, Chelsea Finn

In this paper, we consider the question: can we perform reinforcement learning directly on experience collected by humans?

Continual Learning of Control Primitives: Skill Discovery via Reset-Games

1 code implementation 10 Nov 2020 Kelvin Xu, Siddharth Verma, Chelsea Finn, Sergey Levine

Reinforcement learning has the potential to automate the acquisition of behavior in complex settings, but in order for it to be successfully deployed, a number of practical challenges must be addressed.

Continual Learning

Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation

no code implementations 5 Nov 2020 Aurick Zhou, Sergey Levine

In this paper, we propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation, calibration, and out-of-distribution robustness with deep networks.

Bayesian Inference

Conservative Safety Critics for Exploration

no code implementations ICLR 2021 Homanga Bharadhwaj, Aviral Kumar, Nicholas Rhinehart, Sergey Levine, Florian Shkurti, Animesh Garg

Safe exploration presents a major challenge in reinforcement learning (RL): when active data collection requires deploying partially trained policies, we must ensure that these policies avoid catastrophically unsafe regions, while still enabling trial and error learning.

Safe Exploration

Generative Temporal Difference Learning for Infinite-Horizon Prediction

1 code implementation 27 Oct 2020 Michael Janner, Igor Mordatch, Sergey Levine

We introduce the $\gamma$-model, a predictive model of environment dynamics with an infinite probabilistic horizon.
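
The infinite-horizon model can be characterized by the standard fixed-point identity for discounted state-occupancy distributions, which is what makes a TD-style training rule possible (notation ours, not necessarily the paper's):

```latex
% Discounted future-state distribution as a TD-style fixed point.
\mu(s_e \mid s, a) \;=\; (1-\gamma)\, p(s_e \mid s, a)
  \;+\; \gamma\, \mathbb{E}_{s' \sim p(\cdot \mid s,a),\; a' \sim \pi(\cdot \mid s')}
  \big[\, \mu(s_e \mid s', a') \,\big]
```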

One Solution is Not All You Need: Few-Shot Extrapolation via Structured MaxEnt RL

no code implementations NeurIPS 2020 Saurabh Kumar, Aviral Kumar, Sergey Levine, Chelsea Finn

While reinforcement learning algorithms can learn effective policies for complex tasks, these policies are often brittle to even minor task variations, especially when variations are not explicitly provided during training.

COG: Connecting New Skills to Past Experience with Offline Reinforcement Learning

1 code implementation 27 Oct 2020 Avi Singh, Albert Yu, Jonathan Yang, Jesse Zhang, Aviral Kumar, Sergey Levine

Reinforcement learning has been applied to a wide variety of robotics problems, but most such applications involve collecting data from scratch for each new task.

Implicit Under-Parameterization Inhibits Data-Efficient Deep Reinforcement Learning

no code implementations ICLR 2021 Aviral Kumar, Rishabh Agarwal, Dibya Ghosh, Sergey Levine

We identify an implicit under-parameterization phenomenon in value-based deep RL methods that use bootstrapping: when value functions, approximated using deep neural networks, are trained with gradient descent using iterated regression onto target values generated by previous instances of the value network, more gradient updates decrease the expressivity of the current value network.

MELD: Meta-Reinforcement Learning from Images via Latent State Models

1 code implementation 26 Oct 2020 Tony Z. Zhao, Anusha Nagabandi, Kate Rakelly, Chelsea Finn, Sergey Levine

Meta-reinforcement learning algorithms can enable autonomous agents, such as robots, to quickly acquire new behaviors by leveraging prior experience in a set of related training tasks.

Meta-Learning Meta Reinforcement Learning +1

OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning

no code implementations ICLR 2021 Anurag Ajay, Aviral Kumar, Pulkit Agrawal, Sergey Levine, Ofir Nachum

Reinforcement learning (RL) has achieved impressive performance in a variety of online settings in which an agent's ability to query the environment for transitions and rewards is effectively unlimited.

Few-Shot Imitation Learning Imitation Learning +1

LaND: Learning to Navigate from Disengagements

1 code implementation 9 Oct 2020 Gregory Kahn, Pieter Abbeel, Sergey Levine

However, we believe that these disengagements not only show where the system fails, which is useful for troubleshooting, but also provide a direct learning signal by which the robot can learn to navigate.

Autonomous Navigation Imitation Learning

Emergent Social Learning via Multi-agent Reinforcement Learning

no code implementations 1 Oct 2020 Kamal Ndousse, Douglas Eck, Sergey Levine, Natasha Jaques

We analyze the reasons for this deficiency, and show that by imposing constraints on the training environment and introducing a model-based auxiliary loss we are able to obtain generalized social learning policies which enable agents to: i) discover complex skills that are not learned from single-agent training, and ii) adapt online to novel environments by taking cues from experts present in the new environment.

Imitation Learning Multi-agent Reinforcement Learning

Amortized Conditional Normalized Maximum Likelihood

no code implementations 28 Sep 2020 Aurick Zhou, Sergey Levine

In this paper, we propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation, calibration, and out-of-distribution robustness with deep networks.

Bayesian Inference

Adaptive Risk Minimization: A Meta-Learning Approach for Tackling Group Shift

no code implementations 28 Sep 2020 Marvin Mengxin Zhang, Henrik Marklund, Nikita Dhawan, Abhishek Gupta, Sergey Levine, Chelsea Finn

A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.

Image Classification Meta-Learning

Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings

1 code implementation ICML 2020 Jesse Zhang, Brian Cheung, Chelsea Finn, Sergey Levine, Dinesh Jayaraman

Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous, imperiling the RL agent, other agents, and the environment.

Offline Meta-Reinforcement Learning with Advantage Weighting

no code implementations 13 Aug 2020 Eric Mitchell, Rafael Rafailov, Xue Bin Peng, Sergey Levine, Chelsea Finn

That is, in offline meta-RL, we meta-train on fixed, pre-collected data from several tasks in order to adapt to a new task with a very small amount (less than 5 trajectories) of data from the new task.

Machine Translation Meta-Learning +3

Assisted Perception: Optimizing Observations to Communicate State

1 code implementation 6 Aug 2020 Siddharth Reddy, Sergey Levine, Anca D. Dragan

We evaluate ASE in a user study with 12 participants who each perform four tasks: two tasks with known user biases -- bandwidth-limited image classification and a driving video game with observation delay -- and two with unknown biases that our method has to learn -- guided 2D navigation and a lunar lander teleoperation video game.

Image Classification

Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions

no code implementations 5 Jul 2020 Michael Chang, Sidhant Kaushik, S. Matthew Weinberg, Thomas L. Griffiths, Sergey Levine

This paper seeks to establish a framework for directing a society of simple, specialized, self-interested agents to solve what are traditionally posed as monolithic single-agent sequential decision problems.

Decision Making Transfer Learning

Object Files and Schemata: Factorizing Declarative and Procedural Knowledge in Dynamical Systems

no code implementations 29 Jun 2020 Anirudh Goyal, Alex Lamb, Phanideep Gampa, Philippe Beaudoin, Sergey Levine, Charles Blundell, Yoshua Bengio, Michael Mozer

To use a video game as an illustration, two enemies of the same type will share schemata but will have separate object files to encode their distinct state (e.g., health, position).

Can Autonomous Vehicles Identify, Recover From, and Adapt to Distribution Shifts?

2 code implementations ICML 2020 Angelos Filos, Panagiotis Tigas, Rowan McAllister, Nicholas Rhinehart, Sergey Levine, Yarin Gal

Out-of-training-distribution (OOD) scenarios are a common challenge of learning agents at deployment, typically leading to arbitrary deductions and poorly-informed decisions.

Autonomous Vehicles

Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers

1 code implementation ICLR 2021 Benjamin Eysenbach, Swapnil Asawa, Shreyas Chaudhari, Sergey Levine, Ruslan Salakhutdinov

Building off of a probabilistic view of RL, we formally show that we can achieve this goal by compensating for the difference in dynamics by modifying the reward function.

Continuous Control Domain Adaptation

Simple and Effective VAE Training with Calibrated Decoders

1 code implementation 23 Jun 2020 Oleh Rybkin, Kostas Daniilidis, Sergey Levine

We perform the first comprehensive comparative analysis of calibrated decoders and provide recommendations for simple and effective VAE training.

Long-Horizon Visual Planning with Goal-Conditioned Hierarchical Predictors

1 code implementation NeurIPS 2020 Karl Pertsch, Oleh Rybkin, Frederik Ebert, Chelsea Finn, Dinesh Jayaraman, Sergey Levine

In this work we propose a framework for visual prediction and planning that is able to overcome both of these limitations.

Ecological Reinforcement Learning

no code implementations 22 Jun 2020 John D. Co-Reyes, Suvansh Sanjeev, Glen Berseth, Abhishek Gupta, Sergey Levine

Much of the current work on reinforcement learning studies episodic settings, where the agent is reset between trials to an initial state distribution, often with well-shaped reward functions.

Learning Invariant Representations for Reinforcement Learning without Reconstruction

1 code implementation 18 Jun 2020 Amy Zhang, Rowan McAllister, Roberto Calandra, Yarin Gal, Sergey Levine

We study how representation learning can accelerate reinforcement learning from rich observations, such as images, without relying either on domain knowledge or pixel-reconstruction.

Causal Inference Representation Learning

RL-CycleGAN: Reinforcement Learning Aware Simulation-To-Real

no code implementations CVPR 2020 Kanishka Rao, Chris Harris, Alex Irpan, Sergey Levine, Julian Ibarz, Mohi Khansari

However, this sort of translation is typically task-agnostic, in that the translated images may not preserve all features that are relevant to the task.

Robotic Grasping Translation

AWAC: Accelerating Online Reinforcement Learning with Offline Datasets

1 code implementation 16 Jun 2020 Ashvin Nair, Abhishek Gupta, Murtaza Dalal, Sergey Levine

If we can instead allow RL algorithms to effectively use previously collected data to aid the online learning process, such applications could be made substantially more practical: the prior data would provide a starting point that mitigates challenges due to exploration and sample complexity, while the online training enables the agent to perfect the desired skill.

Efficient Adaptation for End-to-End Vision-Based Robotic Manipulation

no code implementations ICML Workshop LifelongML 2020 Ryan Julian, Benjamin Swanson, Gaurav S. Sukhatme, Sergey Levine, Chelsea Finn, Karol Hausman

One of the great promises of robot learning systems is that they will be able to learn from their mistakes and continuously adapt to ever-changing environments, but most robot learning systems today are deployed as fixed policies which do not adapt after deployment.

Continual Learning Robotic Grasping

Meta-Reinforcement Learning Robust to Distributional Shift via Model Identification and Experience Relabeling

no code implementations 12 Jun 2020 Russell Mendonca, Xinyang Geng, Chelsea Finn, Sergey Levine

Our method is based on a simple insight: we recognize that dynamics models can be adapted efficiently and consistently with off-policy data, more easily than policies and value functions.

Meta Reinforcement Learning

Conservative Q-Learning for Offline Reinforcement Learning

9 code implementations NeurIPS 2020 Aviral Kumar, Aurick Zhou, George Tucker, Sergey Levine

We theoretically show that CQL produces a lower bound on the value of the current policy and that it can be incorporated into a policy learning procedure with theoretical improvement guarantees.

Continuous Control DQN Replay Dataset +1
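
The lower-bound property comes from a conservative regularizer added to the usual Bellman error: push Q down on actions proposed by the learner and up on actions in the dataset. A minimal sketch of one simple variant of that penalty, with an assumed q_net(state, action) interface and the TD term omitted:

```python
# Conservative penalty (one simple CQL-style variant), to be scaled by a
# coefficient alpha and added to a standard TD loss (not shown here).
import torch

def conservative_penalty(q_net, states, dataset_actions, policy_actions):
    q_pi = q_net(states, policy_actions)     # actions the current policy proposes
    q_data = q_net(states, dataset_actions)  # actions actually present in the data
    return (q_pi - q_data).mean()
```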

Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems

1 code implementation 4 May 2020 Sergey Levine, Aviral Kumar, George Tucker, Justin Fu

In this tutorial article, we aim to provide the reader with the conceptual tools needed to get started on research on offline reinforcement learning algorithms: reinforcement learning algorithms that utilize previously collected data, without additional online data collection.

Decision Making

The Ingredients of Real World Robotic Reinforcement Learning

no code implementations ICLR 2020 Henry Zhu, Justin Yu, Abhishek Gupta, Dhruv Shah, Kristian Hartikainen, Avi Singh, Vikash Kumar, Sergey Levine

The success of reinforcement learning in the real world has been limited to instrumented laboratory scenarios, often requiring arduous human supervision to enable continuous learning.

Model Based Reinforcement Learning for Atari

no code implementations ICLR 2020 Łukasz Kaiser, Mohammad Babaeizadeh, Piotr Miłos, Błażej Osiński, Roy H. Campbell, Konrad Czechowski, Dumitru Erhan, Chelsea Finn, Piotr Kozakowski, Sergey Levine, Afroz Mohiuddin, Ryan Sepassi, George Tucker, Henryk Michalewski

We describe Simulated Policy Learning (SimPLe), a complete model-based deep RL algorithm based on video prediction models and present a comparison of several model architectures, including a novel architecture that yields the best results in our setting.

Atari Games Model-based Reinforcement Learning +1

Dynamics-Aware Unsupervised Skill Discovery

1 code implementation ICLR 2020 Archit Sharma, Shixiang Gu, Sergey Levine, Vikash Kumar, Karol Hausman

Conventionally, model-based reinforcement learning (MBRL) aims to learn a global model for the dynamics of the environment.

Model-based Reinforcement Learning

Meta-Reinforcement Learning for Robotic Industrial Insertion Tasks

no code implementations 29 Apr 2020 Gerrit Schoettler, Ashvin Nair, Juan Aparicio Ojea, Sergey Levine, Eugen Solowjow

Robotic insertion tasks are characterized by contact and friction mechanics, making them challenging for conventional feedback control methods due to unmodeled physical effects.

Meta Reinforcement Learning

Emergent Real-World Robotic Skills via Unsupervised Off-Policy Reinforcement Learning

2 code implementations 27 Apr 2020 Archit Sharma, Michael Ahn, Sergey Levine, Vikash Kumar, Karol Hausman, Shixiang Gu

Can we instead develop efficient reinforcement learning methods that acquire diverse skills without any reward function, and then repurpose these skills for downstream tasks?

Unsupervised Reinforcement Learning

The Ingredients of Real-World Robotic Reinforcement Learning

no code implementations 27 Apr 2020 Henry Zhu, Justin Yu, Abhishek Gupta, Dhruv Shah, Kristian Hartikainen, Avi Singh, Vikash Kumar, Sergey Levine

In this work, we discuss the elements that are needed for a robotic learning system that can continually and autonomously improve with data collected in the real world.

The Variational Bandwidth Bottleneck: Stochastic Evaluation on an Information Budget

1 code implementation ICLR 2020 Anirudh Goyal, Yoshua Bengio, Matthew Botvinick, Sergey Levine

This is typically the case when we have a standard conditioning input, such as a state observation, and a "privileged" input, which might correspond to the goal of a task, the output of a costly planning algorithm, or communication with another agent.

Variational Inference

Model-Based Meta-Reinforcement Learning for Flight with Suspended Payloads

2 code implementations 23 Apr 2020 Suneel Belkhale, Rachel Li, Gregory Kahn, Rowan McAllister, Roberto Calandra, Sergey Levine

Our experiments demonstrate that our online adaptation approach outperforms non-adaptive methods on a series of challenging suspended payload transportation tasks.

Meta-Learning Meta Reinforcement Learning

Never Stop Learning: The Effectiveness of Fine-Tuning in Robotic Reinforcement Learning

no code implementations 21 Apr 2020 Ryan Julian, Benjamin Swanson, Gaurav S. Sukhatme, Sergey Levine, Chelsea Finn, Karol Hausman

One of the great promises of robot learning systems is that they will be able to learn from their mistakes and continuously adapt to ever-changing environments.

Continual Learning Robotic Grasping

D4RL: Datasets for Deep Data-Driven Reinforcement Learning

3 code implementations 15 Apr 2020 Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, Sergey Levine

In this work, we introduce benchmarks specifically designed for the offline setting, guided by key properties of datasets relevant to real-world applications of offline RL.

Offline RL

Thinking While Moving: Deep Reinforcement Learning with Concurrent Control

no code implementations ICLR 2020 Ted Xiao, Eric Jang, Dmitry Kalashnikov, Sergey Levine, Julian Ibarz, Karol Hausman, Alexander Herzog

We study reinforcement learning in settings where sampling an action from the policy must be done concurrently with the time evolution of the controlled system, such as when a robot must decide on the next action while still performing the previous action.

Robotic Grasping

Learning Agile Robotic Locomotion Skills by Imitating Animals

no code implementations 2 Apr 2020 Xue Bin Peng, Erwin Coumans, Tingnan Zhang, Tsang-Wei Lee, Jie Tan, Sergey Levine

In this work, we present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.

Domain Adaptation Imitation Learning +1

Inverting the Pose Forecasting Pipeline with SPF2: Sequential Pointcloud Forecasting for Sequential Pose Forecasting

no code implementations 18 Mar 2020 Xinshuo Weng, Jianren Wang, Sergey Levine, Kris Kitani, Nicholas Rhinehart

Through experiments on a robotic manipulation dataset and two driving datasets, we show that SPFNet is effective for the SPF task, that our forecast-then-detect pipeline outperforms the detect-then-forecast approaches we compared against, and that pose forecasting performance improves with the addition of unlabeled data.

Decision Making Future prediction +1

OmniTact: A Multi-Directional High Resolution Touch Sensor

1 code implementation 16 Mar 2020 Akhil Padmanabha, Frederik Ebert, Stephen Tian, Roberto Calandra, Chelsea Finn, Sergey Levine

We compare with a state-of-the-art tactile sensor that is only sensitive on one side, as well as a state-of-the-art multi-directional tactile sensor, and find that OmniTact's combination of high-resolution and multi-directional sensing is crucial for reliably inserting the electrical connector and allows for higher accuracy in the state estimation task.

DisCor: Corrective Feedback in Reinforcement Learning via Distribution Correction

3 code implementations NeurIPS 2020 Aviral Kumar, Abhishek Gupta, Sergey Levine

We show that bootstrapping-based Q-learning algorithms do not necessarily benefit from this corrective feedback, and training on the experience collected by the algorithm is not sufficient to correct errors in the Q-function.

Meta-Learning Multi-Task Learning +1

Scalable Multi-Task Imitation Learning with Autonomous Improvement

no code implementations 25 Feb 2020 Avi Singh, Eric Jang, Alexander Irpan, Daniel Kappler, Murtaza Dalal, Sergey Levine, Mohi Khansari, Chelsea Finn

In this work, we target this challenge, aiming to build an imitation learning system that can continuously improve through autonomous data collection, while simultaneously avoiding the explicit use of reinforcement learning, to maintain the stability, simplicity, and scalability of supervised imitation.

Imitation Learning

Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement

1 code implementation NeurIPS 2020 Benjamin Eysenbach, Xinyang Geng, Sergey Levine, Ruslan Salakhutdinov

In this paper, we show that hindsight relabeling is inverse RL, an observation that suggests that we can use inverse RL in tandem with RL algorithms to efficiently solve many tasks.

Learning to Walk in the Real World with Minimal Human Effort

no code implementations 20 Feb 2020 Sehoon Ha, Peng Xu, Zhenyu Tan, Sergey Levine, Jie Tan

In this paper, we develop a system for learning legged locomotion policies with deep RL in the real world with minimal human effort.

Legged Robots Multi-Task Learning

BADGR: An Autonomous Self-Supervised Learning-Based Navigation System

1 code implementation 13 Feb 2020 Gregory Kahn, Pieter Abbeel, Sergey Levine

Mobile robot navigation is typically regarded as a geometric problem, in which the robot's objective is to perceive the geometry of the environment in order to plan collision-free paths towards a desired goal.

Robot Navigation Self-Supervised Learning

Gradient Surgery for Multi-Task Learning

7 code implementations NeurIPS 2020 Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, Chelsea Finn

While deep learning and deep reinforcement learning (RL) systems have demonstrated impressive results in domains such as image classification, game playing, and robotic control, data efficiency remains a major challenge.

Image Classification Multi-Task Learning

Reward-Conditioned Policies

1 code implementation 31 Dec 2019 Aviral Kumar, Xue Bin Peng, Sergey Levine

By then conditioning the policy on the numerical value of the reward, we can obtain a policy that generalizes to larger returns.

Imitation Learning

Model Inversion Networks for Model-Based Optimization

no code implementations NeurIPS 2020 Aviral Kumar, Sergey Levine

MINs can scale to high-dimensional input spaces and leverage offline logged data for both contextual and non-contextual optimization problems.

Morphology-Agnostic Visual Robotic Control

no code implementations 31 Dec 2019 Brian Yang, Dinesh Jayaraman, Glen Berseth, Alexei Efros, Sergey Levine

Existing approaches for visuomotor robotic control typically require characterizing the robot in advance by calibrating the camera or performing system identification.

Learning Predictive Models From Observation and Interaction

no code implementations ECCV 2020 Karl Schmeckpeper, Annie Xie, Oleh Rybkin, Stephen Tian, Kostas Daniilidis, Sergey Levine, Chelsea Finn

Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works, and then use this learned model to plan coordinated sequences of actions to bring about desired outcomes.

Learning to Reach Goals via Iterated Supervised Learning

2 code implementations ICLR 2021 Dibya Ghosh, Abhishek Gupta, Ashwin Reddy, Justin Fu, Coline Devin, Benjamin Eysenbach, Sergey Levine

Current reinforcement learning (RL) algorithms can be brittle and difficult to use, especially when learning goal-reaching behaviors from sparse rewards.

Multi-Goal Reinforcement Learning

AVID: Learning Multi-Stage Tasks via Pixel-Level Translation of Human Videos

no code implementations 10 Dec 2019 Laura Smith, Nikita Dhawan, Marvin Zhang, Pieter Abbeel, Sergey Levine

In this paper, we study how these challenges can be alleviated with an automated robotic learning framework, in which multi-stage tasks are defined simply by providing videos of a human demonstrator and then learned autonomously by the robot from raw image observations.

Translation

Meta-Learning without Memorization

1 code implementation ICLR 2020 Mingzhang Yin, George Tucker, Mingyuan Zhou, Sergey Levine, Chelsea Finn

If this is not done, the meta-learner can ignore the task training data and learn a single model that performs all of the meta-training tasks zero-shot, but does not adapt effectively to new image classes.

Few-Shot Image Classification Meta-Learning

Unsupervised Curricula for Visual Meta-Reinforcement Learning

no code implementations NeurIPS 2019 Allan Jabri, Kyle Hsu, Ben Eysenbach, Abhishek Gupta, Sergey Levine, Chelsea Finn

In experiments on vision-based navigation and manipulation domains, we show that the algorithm allows for unsupervised meta-learning that transfers to downstream tasks specified by hand-crafted reward functions and serves as pre-training for more efficient supervised meta-learning of test task distributions.

Meta-Learning Meta Reinforcement Learning

Inter-Level Cooperation in Hierarchical Reinforcement Learning

1 code implementation 5 Dec 2019 Abdul Rahman Kreidieh, Glen Berseth, Brandon Trabucco, Samyak Parajuli, Sergey Levine, Alexandre M. Bayen

This allows us to draw on connections between communication and cooperation in multi-agent RL, and demonstrate the benefits of increased cooperation between sub-policies on the training performance of the overall policy.

Hierarchical Reinforcement Learning

Learning Human Objectives by Evaluating Hypothetical Behavior

1 code implementation ICML 2020 Siddharth Reddy, Anca D. Dragan, Sergey Levine, Shane Legg, Jan Leike

To address this challenge, we propose an algorithm that safely and interactively learns a model of the user's reward function.

Car Racing

Compositional Plan Vectors

1 code implementation NeurIPS 2019 Coline Devin, Daniel Geng, Pieter Abbeel, Trevor Darrell, Sergey Levine

We show that CPVs can be learned within a one-shot imitation learning framework without any additional supervision or information about task hierarchy, and enable a demonstration-conditioned policy to generalize to tasks that sequence twice as many skills as the tasks seen during training.

Imitation Learning

Planning with Goal-Conditioned Policies

1 code implementation NeurIPS 2019 Soroush Nasiriany, Vitchyr H. Pong, Steven Lin, Sergey Levine

Planning methods can solve temporally extended sequential decision making problems by composing simple behaviors.

Decision Making Robot Navigation

Plan Arithmetic: Compositional Plan Vectors for Multi-Task Control

no code implementations 30 Oct 2019 Coline Devin, Daniel Geng, Pieter Abbeel, Trevor Darrell, Sergey Levine

We show that CPVs can be learned within a one-shot imitation learning framework without any additional supervision or information about task hierarchy, and enable a demonstration-conditioned policy to generalize to tasks that sequence twice as many skills as the tasks seen during training.

Imitation Learning

Entity Abstraction in Visual Model-Based Reinforcement Learning

1 code implementation 28 Oct 2019 Rishi Veerapaneni, John D. Co-Reyes, Michael Chang, Michael Janner, Chelsea Finn, Jiajun Wu, Joshua B. Tenenbaum, Sergey Levine

This paper tests the hypothesis that modeling a scene in terms of entities and their local interactions, as opposed to modeling the scene globally, provides a significant benefit in generalizing to physical tasks in a combinatorial space the learner has not encountered before.

Model-based Reinforcement Learning Object Discovery +2

Relay Policy Learning: Solving Long-Horizon Tasks via Imitation and Reinforcement Learning

1 code implementation 25 Oct 2019 Abhishek Gupta, Vikash Kumar, Corey Lynch, Sergey Levine, Karol Hausman

We present relay policy learning, a method for imitation and reinforcement learning that can solve multi-stage, long-horizon robotic tasks.

Imitation Learning

Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning

5 code implementations 24 Oct 2019 Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Avnish Narayan, Hayden Shively, Adithya Bellathur, Karol Hausman, Chelsea Finn, Sergey Levine

Therefore, if the aim of these methods is to enable faster acquisition of entirely new behaviors, we must evaluate them on task distributions that are sufficiently broad to enable generalization to new behaviors.

Meta-Learning Meta Reinforcement Learning +1

RoboNet: Large-Scale Multi-Robot Learning

no code implementations 24 Oct 2019 Sudeep Dasari, Frederik Ebert, Stephen Tian, Suraj Nair, Bernadette Bucher, Karl Schmeckpeper, Siddharth Singh, Sergey Levine, Chelsea Finn

This leads to a frequent tension in robotic learning: how can we learn generalizable robotic controllers without having to collect impractically large amounts of data for each separate experiment?

Video Prediction

Contextual Imagined Goals for Self-Supervised Robotic Learning

1 code implementation 23 Oct 2019 Ashvin Nair, Shikhar Bahl, Alexander Khazatsky, Vitchyr Pong, Glen Berseth, Sergey Levine

When the robot's environment and available objects vary, as they do in most open-world settings, the robot must propose to itself only those goals that it can accomplish in its present setting with the objects that are at hand.

If MaxEnt RL is the Answer, What is the Question?

no code implementations 4 Oct 2019 Benjamin Eysenbach, Sergey Levine

In particular, we show (1) that MaxEnt RL can be used to solve a certain class of POMDPs, and (2) that MaxEnt RL is equivalent to a two-player game where an adversary chooses the reward function.
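
For concreteness, the textbook maximum-entropy objective under discussion augments the return with a policy-entropy bonus:

```latex
% Standard maximum-entropy RL objective (textbook form).
J(\pi) \;=\; \mathbb{E}_{\pi}\Big[ \sum_{t} \gamma^{t}
  \big( r(s_t, a_t) + \alpha\, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \big) \Big]
```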

Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning

5 code implementations 1 Oct 2019 Xue Bin Peng, Aviral Kumar, Grace Zhang, Sergey Levine

In this paper, we aim to develop a simple and scalable reinforcement learning algorithm that uses standard supervised learning methods as subroutines.

Continuous Control OpenAI Gym
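
The supervised subroutine here is weighted regression: imitate dataset actions with weights that grow exponentially in the estimated advantage. A hedged sketch; the policy.log_prob interface, temperature beta, and weight clipping are our assumptions:

```python
# Advantage-weighted regression step: supervised imitation of dataset
# actions, exponentially weighted by advantage estimates.
import torch

def awr_loss(policy, states, actions, advantages, beta=1.0):
    weights = torch.exp(advantages / beta).clamp(max=20.0)  # clip for stability
    log_probs = policy.log_prob(states, actions)            # assumed policy API
    return -(weights.detach() * log_probs).mean()
```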

ROBEL: Robotics Benchmarks for Learning with Low-Cost Robots

1 code implementation 25 Sep 2019 Michael Ahn, Henry Zhu, Kristian Hartikainen, Hugo Ponte, Abhishek Gupta, Sergey Levine, Vikash Kumar

ROBEL introduces two robots, each aimed at accelerating reinforcement learning research in different task domains: D'Claw is a three-fingered hand robot that facilitates learning dexterous manipulation tasks, and D'Kitty is a four-legged robot that facilitates learning agile legged locomotion tasks.

Continuous Control

Consistent Meta-Reinforcement Learning via Model Identification and Experience Relabeling

no code implementations 25 Sep 2019 Russell Mendonca, Xinyang Geng, Chelsea Finn, Sergey Levine

Reinforcement learning algorithms can acquire policies for complex tasks automatically, however the number of samples required to learn a diverse set of skills can be prohibitively large.

Meta Reinforcement Learning

Advantage Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning

no code implementations 25 Sep 2019 Xue Bin Peng, Aviral Kumar, Grace Zhang, Sergey Levine

In this paper, we aim to develop a simple and scalable reinforcement learning algorithm that uses standard supervised learning methods as subroutines.

Continuous Control OpenAI Gym

Adaptive Adversarial Imitation Learning

no code implementations 25 Sep 2019 Yiren Lu, Jonathan Tompson, Sergey Levine

We present the ADaptive Adversarial Imitation Learning (ADAIL) algorithm for learning adaptive policies that can be transferred between environments of varying dynamics, by imitating a small number of demonstrations collected from a single source domain.

Imitation Learning

Mint: Matrix-Interleaving for Multi-Task Learning

no code implementations 25 Sep 2019 Tianhe Yu, Saurabh Kumar, Eric Mitchell, Abhishek Gupta, Karol Hausman, Sergey Levine, Chelsea Finn

Deep learning enables training of large and flexible function approximators from scratch at the cost of large amounts of data.

Multi-Task Learning

Hope For The Best But Prepare For The Worst: Cautious Adaptation In RL Agents

no code implementations 25 Sep 2019 Jesse Zhang, Brian Cheung, Chelsea Finn, Dinesh Jayaraman, Sergey Levine

We study the problem of safe adaptation: given a model trained on a variety of past experiences for some task, can this model learn to perform that task in a new situation while avoiding catastrophic failure?

Domain Adaptation Meta Reinforcement Learning

Learning to Reach Goals Without Reinforcement Learning

no code implementations 25 Sep 2019 Dibya Ghosh, Abhishek Gupta, Justin Fu, Ashwin Reddy, Coline Devin, Benjamin Eysenbach, Sergey Levine

By maximizing the likelihood of good actions provided by an expert demonstrator, supervised imitation learning can produce effective policies without the algorithmic complexities and optimization challenges of reinforcement learning, at the cost of requiring an expert demonstrator -- typically a person -- to provide the demonstrations.

Imitation Learning

Goal-Conditioned Video Prediction

no code implementations 25 Sep 2019 Oleh Rybkin, Karl Pertsch, Frederik Ebert, Dinesh Jayaraman, Chelsea Finn, Sergey Levine

Prior work on video generation largely focuses on prediction models that only observe frames from the beginning of the video.

Imitation Learning Video Generation +1

Deep Dynamics Models for Learning Dexterous Manipulation

2 code implementations 25 Sep 2019 Anusha Nagabandi, Kurt Konolige, Sergey Levine, Vikash Kumar

Dexterous multi-fingered hands can provide robots with the ability to flexibly perform a wide range of manipulation skills.

Recurrent Independent Mechanisms

4 code implementations ICLR 2021 Anirudh Goyal, Alex Lamb, Jordan Hoffmann, Shagun Sodhani, Sergey Levine, Yoshua Bengio, Bernhard Schölkopf

Learning modular structures which reflect the dynamics of the environment can lead to better generalization and robustness to changes which only affect a few of the underlying causes.

Why Does Hierarchy (Sometimes) Work So Well in Reinforcement Learning?

no code implementations 23 Sep 2019 Ofir Nachum, Haoran Tang, Xingyu Lu, Shixiang Gu, Honglak Lee, Sergey Levine

Hierarchical reinforcement learning has demonstrated significant success at solving difficult reinforcement learning (RL) tasks.

Hierarchical Reinforcement Learning

Scaled Autonomy: Enabling Human Operators to Control Robot Fleets

no code implementations 22 Sep 2019 Gokul Swamy, Siddharth Reddy, Sergey Levine, Anca D. Dragan

We learn a model of the user's preferences from observations of the user's choices in easy settings with a few robots, and use it in challenging settings with more robots to automatically identify which robot the user would most likely choose to control, if they were able to evaluate the states of all robots at all times.

Robot Navigation

Meta-Learning with Implicit Gradients

4 code implementations NeurIPS 2019 Aravind Rajeswaran, Chelsea Finn, Sham Kakade, Sergey Levine

By drawing upon implicit differentiation, we develop the implicit MAML algorithm, which depends only on the solution to the inner level optimization and not the path taken by the inner loop optimizer.

Few-Shot Image Classification

Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery

no code implementations ICLR 2020 Kristian Hartikainen, Xinyang Geng, Tuomas Haarnoja, Sergey Levine

We show that dynamical distances can be used in a semi-supervised regime, where unsupervised interaction with the environment is used to learn the dynamical distances, while a small amount of preference supervision is used to determine the task goal, without any manually engineered reward function or goal examples.

Dynamics-Aware Unsupervised Discovery of Skills

3 code implementations 2 Jul 2019 Archit Sharma, Shixiang Gu, Sergey Levine, Vikash Kumar, Karol Hausman

Conventionally, model-based reinforcement learning (MBRL) aims to learn a global model for the dynamics of the environment.

Model-based Reinforcement Learning

Reinforcement Learning with Competitive Ensembles of Information-Constrained Primitives

1 code implementation ICLR 2020 Anirudh Goyal, Shagun Sodhani, Jonathan Binas, Xue Bin Peng, Sergey Levine, Yoshua Bengio

Reinforcement learning agents that operate in diverse and complex environments can benefit from the structured decomposition of their behavior.

Hierarchical Reinforcement Learning

When to Trust Your Model: Model-Based Policy Optimization

10 code implementations NeurIPS 2019 Michael Janner, Justin Fu, Marvin Zhang, Sergey Levine

Designing effective model-based reinforcement learning algorithms is difficult because the ease of data generation must be weighed against the bias of model-generated data.

Model-based Reinforcement Learning

Deep Reinforcement Learning for Industrial Insertion Tasks with Visual Inputs and Natural Rewards

1 code implementation 13 Jun 2019 Gerrit Schoettler, Ashvin Nair, Jianlan Luo, Shikhar Bahl, Juan Aparicio Ojea, Eugen Solowjow, Sergey Levine

Connector insertion and many other tasks commonly found in modern manufacturing settings involve complex contact dynamics and friction.