Search Results for author: Sergey Levine

Found 489 papers, 227 papers with code

Global Decision-Making via Local Economic Transactions

no code implementations ICML 2020 Michael Chang, Sid Kaushik, S. Matthew Weinberg, Sergey Levine, Thomas Griffiths

This paper seeks to establish a mechanism for directing a collection of simple, specialized, self-interested agents to solve what traditionally are posed as monolithic single-agent sequential decision problems with a central global objective.

Decision Making

Unfamiliar Finetuning Examples Control How Language Models Hallucinate

no code implementations 8 Mar 2024 Katie Kang, Eric Wallace, Claire Tomlin, Aviral Kumar, Sergey Levine

Large language models (LLMs) have a tendency to generate plausible-sounding yet factually incorrect responses, especially when queried on unfamiliar concepts.

Multiple-choice

Stop Regressing: Training Value Functions via Classification for Scalable Deep RL

no code implementations 6 Mar 2024 Jesse Farebrother, Jordi Orbay, Quan Vuong, Adrien Ali Taïga, Yevgen Chebotar, Ted Xiao, Alex Irpan, Sergey Levine, Pablo Samuel Castro, Aleksandra Faust, Aviral Kumar, Rishabh Agarwal

Observing this discrepancy, in this paper, we investigate whether the scalability of deep RL can also be improved simply by using classification in place of regression for training value functions.

Atari Games regression +1
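
The core recipe here is concrete enough to sketch: replace the scalar regression head and MSE loss with a categorical head trained by cross-entropy against a distribution placed on fixed value bins. Below is a minimal, hedged illustration using a two-hot target (the paper also studies other target distributions such as HL-Gauss); `v_min`, `v_max`, and `num_bins` are illustrative choices, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def two_hot(targets, v_min=-10.0, v_max=10.0, num_bins=51):
    """Encode scalar targets as a 'two-hot' categorical distribution
    over fixed value bins (mass split between the two nearest bins)."""
    targets = targets.clamp(v_min, v_max)
    pos = (targets - v_min) / (v_max - v_min) * (num_bins - 1)
    lower, upper = pos.floor().long(), pos.ceil().long()
    frac = (pos - lower.float()).unsqueeze(-1)
    dist = torch.zeros(*targets.shape, num_bins)
    dist.scatter_(-1, lower.unsqueeze(-1), 1.0 - frac)
    dist.scatter_add_(-1, upper.unsqueeze(-1), frac)
    return dist

# logits: output of a categorical value head, shape (batch, num_bins)
logits = torch.randn(32, 51, requires_grad=True)
td_targets = torch.randn(32) * 3.0            # e.g. r + gamma * V(s')
loss = F.cross_entropy(logits, two_hot(td_targets))  # replaces MSE regression
loss.backward()
```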

MOKA: Open-Vocabulary Robotic Manipulation through Mark-Based Visual Prompting

no code implementations 5 Mar 2024 Fangchen Liu, Kuan Fang, Pieter Abbeel, Sergey Levine

In this paper, we present MOKA (Marking Open-vocabulary Keypoint Affordances), an approach that employs VLMs to solve robotic manipulation tasks specified by free-form language descriptions.

In-Context Learning Question Answering +2

SELFI: Autonomous Self-Improvement with Reinforcement Learning for Social Navigation

no code implementations 1 Mar 2024 Noriaki Hirose, Dhruv Shah, Kyle Stachowicz, Ajay Sridhar, Sergey Levine

Specifically, SELFI stabilizes the online learning process by incorporating the same model-based learning objective from offline pre-training into the Q-values learned with online model-free reinforcement learning.

Collision Avoidance reinforcement-learning +2

ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL

1 code implementation 29 Feb 2024 Yifei Zhou, Andrea Zanette, Jiayi Pan, Sergey Levine, Aviral Kumar

In this paper, we develop a framework for building multi-turn RL algorithms for fine-tuning LLMs that preserves the flexibility of existing single-turn RL methods for LLMs (e.g., proximal policy optimization), while accommodating multiple turns, long horizons, and delayed rewards effectively.

Language Modelling Reinforcement Learning (RL)

Unsupervised Zero-Shot Reinforcement Learning via Functional Reward Encodings

1 code implementation 27 Feb 2024 Kevin Frans, Seohong Park, Pieter Abbeel, Sergey Levine

Can we pre-train a generalist agent from a large amount of unlabeled offline trajectories such that it can be immediately adapted to any new downstream task in a zero-shot manner?

Offline RL reinforcement-learning

Feedback Efficient Online Fine-Tuning of Diffusion Models

no code implementations 26 Feb 2024 Masatoshi Uehara, Yulai Zhao, Kevin Black, Ehsan Hajiramezanali, Gabriele Scalia, Nathaniel Lee Diamant, Alex M Tseng, Sergey Levine, Tommaso Biancalani

It is natural to frame this as a reinforcement learning (RL) problem, in which the objective is to fine-tune a diffusion model to maximize a reward function that corresponds to some property.

reinforcement-learning Reinforcement Learning (RL)

Foundation Policies with Hilbert Representations

1 code implementation 23 Feb 2024 Seohong Park, Tobias Kreiman, Sergey Levine

While a number of methods have been proposed to enable generic self-supervised RL, based on principles such as goal-conditioned RL, behavioral cloning, and unsupervised skill learning, such methods remain limited in terms of either the diversity of the discovered behaviors, the need for high-quality demonstration data, or the lack of a clear prompting or adaptation mechanism for downstream tasks.

Reinforcement Learning (RL) Unsupervised Pre-training

Vision-Language Models Provide Promptable Representations for Reinforcement Learning

no code implementations 5 Feb 2024 William Chen, Oier Mees, Aviral Kumar, Sergey Levine

We find that our policies trained on embeddings extracted from general-purpose VLMs outperform equivalent policies trained on generic, non-promptable image embeddings.

Instruction Following reinforcement-learning +3

Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control

no code implementations 30 Jan 2024 Zhongyu Li, Xue Bin Peng, Pieter Abbeel, Sergey Levine, Glen Berseth, Koushil Sreenath

Going beyond focusing on a single locomotion skill, we develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.

reinforcement-learning Reinforcement Learning (RL)

SERL: A Software Suite for Sample-Efficient Robotic Reinforcement Learning

no code implementations 29 Jan 2024 Jianlan Luo, Zheyuan Hu, Charles Xu, You Liang Tan, Jacob Berg, Archit Sharma, Stefan Schaal, Chelsea Finn, Abhishek Gupta, Sergey Levine

We posit that a significant challenge to widespread adoption of robotic RL, as well as further development of robotic RL methods, is the comparative inaccessibility of such methods.

reinforcement-learning Reinforcement Learning (RL)

Functional Graphical Models: Structure Enables Offline Data-Driven Optimization

no code implementations 8 Jan 2024 Jakub Grudzien Kuba, Masatoshi Uehara, Pieter Abbeel, Sergey Levine

This kind of data-driven optimization (DDO) presents a range of challenges beyond those in standard prediction problems, since we need models that successfully predict the performance of new designs that are better than the best designs seen in the training set.

Chain of Code: Reasoning with a Language Model-Augmented Code Emulator

no code implementations 7 Dec 2023 Chengshu Li, Jacky Liang, Andy Zeng, Xinyun Chen, Karol Hausman, Dorsa Sadigh, Sergey Levine, Li Fei-Fei, Fei Xia, Brian Ichter

For example, consider prompting an LM to write code that counts the number of times it detects sarcasm in an essay: the LM may struggle to write an implementation for "detect_sarcasm(string)" that can be executed by the interpreter (handling the edge cases would be insurmountable).

Language Modelling
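
The mechanism the abstract hints at is an interpreter that executes what it can and defers the rest to the language model. The sketch below is a toy rendering of that control flow, not the paper's implementation; `query_lm` is a hypothetical stand-in for an LM call, stubbed here to return a fixed guess.

```python
def query_lm(prompt: str) -> str:
    """Hypothetical LM call that guesses the value of an
    inexecutable expression (stubbed out here)."""
    return "True"  # e.g. the LM's guess for detect_sarcasm(...)

def run_with_lm_fallback(lines, env=None):
    """Execute code line by line; when the interpreter raises
    (e.g. an undefined semantic helper), ask the LM to supply
    the state update instead."""
    env = env or {}
    for line in lines:
        try:
            exec(line, env)  # the real interpreter handles what it can
        except Exception:
            target = line.split("=")[0].strip()
            guess = query_lm(f"Simulate: {line!r} -> value of {target}?")
            env[target] = eval(guess, {})  # the LM emulates the step
    return env

state = run_with_lm_fallback([
    "count = 0",
    "is_sarcastic = detect_sarcasm('nice weather...')",  # undefined -> LM
    "count += int(is_sarcastic)",
])
print(state["count"])
```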

LMRL Gym: Benchmarks for Multi-Turn Reinforcement Learning with Language Models

1 code implementation 30 Nov 2023 Marwa Abdulhai, Isadora White, Charlie Snell, Charles Sun, Joey Hong, Yuexiang Zhai, Kelvin Xu, Sergey Levine

Developing such algorithms requires tasks that can gauge progress on algorithm design, provide accessible and reproducible evaluations for multi-turn interactions, and cover a range of task properties and challenges in improving reinforcement learning algorithms.

reinforcement-learning Text Generation

RLIF: Interactive Imitation Learning as Reinforcement Learning

no code implementations 21 Nov 2023 Jianlan Luo, Perry Dong, Yuexiang Zhai, Yi Ma, Sergey Levine

We also provide a unified framework to analyze our RL method and DAgger, for which we present an asymptotic analysis of the suboptimality gap for both methods, as well as a non-asymptotic sample complexity bound for our method.

Continuous Control Imitation Learning +1

Accelerating Exploration with Unlabeled Prior Data

1 code implementation NeurIPS 2023 Qiyang Li, Jason Zhang, Dibya Ghosh, Amy Zhang, Sergey Levine

Learning to solve tasks from a sparse reward signal is a major challenge for standard reinforcement learning (RL) algorithms.

Reinforcement Learning (RL)

Zero-Shot Goal-Directed Dialogue via RL on Imagined Conversations

no code implementations 9 Nov 2023 Joey Hong, Sergey Levine, Anca Dragan

LLMs trained with supervised fine-tuning or "single-step" RL, as with standard RLHF, might struggle with tasks that require such goal-directed behavior, since they are not trained to optimize for overall conversational outcomes after multiple turns of interaction.

Text Generation

Adapt On-the-Go: Behavior Modulation for Single-Life Robot Deployment

no code implementations 2 Nov 2023 Annie S. Chen, Govind Chada, Laura Smith, Archit Sharma, Zipeng Fu, Sergey Levine, Chelsea Finn

We provide theoretical analysis of our selection mechanism and demonstrate that ROAM enables a robot to adapt rapidly to changes in dynamics both in simulation and on a real Go1 quadruped, even successfully moving forward with roller skates on its feet.

Offline RL with Observation Histories: Analyzing and Improving Sample Complexity

no code implementations 31 Oct 2023 Joey Hong, Anca Dragan, Sergey Levine

Theoretically, we show that standard offline RL algorithms conditioned on observation histories suffer from poor sample complexity, in accordance with the above intuition.

Autonomous Navigation Offline RL +1

Grow Your Limits: Continuous Improvement with Real-World RL for Robotic Locomotion

no code implementations 26 Oct 2023 Laura Smith, YunHao Cao, Sergey Levine

Deep reinforcement learning (RL) can enable robots to autonomously acquire complex behaviors, such as legged locomotion.

Efficient Exploration Reinforcement Learning (RL)

Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning

no code implementations 18 Oct 2023 Jianlan Luo, Perry Dong, Jeffrey Wu, Aviral Kumar, Xinyang Geng, Sergey Levine

We use a VQ-VAE to learn state-conditioned action quantization, avoiding the exponential blowup that comes with naïve discretization of the action space.

Offline RL Quantization +2
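
The quantization component can be illustrated with a minimal VQ-VAE-style bottleneck: snap a continuous action embedding to its nearest codebook entry and pass gradients straight through. This sketch omits the state conditioning the paper describes, and the sizes (`num_codes`, `dim`) are made up for illustration.

```python
import torch
import torch.nn.functional as F

class ActionQuantizer(torch.nn.Module):
    """Minimal vector-quantization bottleneck: map a continuous
    action embedding to its nearest codebook entry, with a
    straight-through gradient (the VQ-VAE trick)."""
    def __init__(self, num_codes=64, dim=8):
        super().__init__()
        self.codebook = torch.nn.Embedding(num_codes, dim)

    def forward(self, z):
        # pairwise distances to all codes -> nearest-code indices
        dists = torch.cdist(z, self.codebook.weight)
        idx = dists.argmin(dim=-1)
        z_q = self.codebook(idx)
        # codebook / commitment losses as in VQ-VAE
        vq_loss = F.mse_loss(z_q, z.detach()) + 0.25 * F.mse_loss(z, z_q.detach())
        # straight-through: forward uses z_q, gradient flows to z
        z_q = z + (z_q - z).detach()
        return z_q, idx, vq_loss

quantizer = ActionQuantizer()
z = torch.randn(16, 8, requires_grad=True)   # e.g. an encoded action
z_q, codes, vq_loss = quantizer(z)
vq_loss.backward()
```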

Latent Conservative Objective Models for Data-Driven Crystal Structure Prediction

no code implementations 16 Oct 2023 Han Qi, Xinyang Geng, Stefano Rando, Iku Ohama, Aviral Kumar, Sergey Levine

In computational chemistry, crystal structure prediction (CSP) is an optimization problem that involves discovering the lowest energy stable crystal structure for a given chemical formula.

Formation Energy

Navigation with Large Language Models: Semantic Guesswork as a Heuristic for Planning

no code implementations 16 Oct 2023 Dhruv Shah, Michael Equi, Blazej Osinski, Fei Xia, Brian Ichter, Sergey Levine

Navigation in unfamiliar environments presents a major challenge for robots: while mapping and planning techniques can be used to build up a representation of the world, quickly discovering a path to a desired goal in unfamiliar settings with such methods often requires lengthy mapping and exploration.

Language Modelling Navigate

METRA: Scalable Unsupervised RL with Metric-Aware Abstraction

1 code implementation 13 Oct 2023 Seohong Park, Oleh Rybkin, Sergey Levine

Through our experiments in five locomotion and manipulation environments, we demonstrate that METRA can discover a variety of useful behaviors even in complex, pixel-based environments, being the first unsupervised RL method that discovers diverse locomotion behaviors in pixel-based Quadruped and Humanoid.

Reinforcement Learning (RL) Unsupervised Pre-training +1

NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration

no code implementations 11 Oct 2023 Ajay Sridhar, Dhruv Shah, Catherine Glossop, Sergey Levine

In this paper, we describe how we can train a single unified diffusion policy to handle both goal-directed navigation and goal-agnostic exploration, with the latter providing the ability to search novel environments, and the former providing the ability to reach a user-specified goal once it has been located.

Deep Neural Networks Tend To Extrapolate Predictably

1 code implementation 2 Oct 2023 Katie Kang, Amrith Setlur, Claire Tomlin, Sergey Levine

Rather than extrapolating in arbitrary ways, we observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.

Decision Making
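
A small harness for probing this observation on a model of one's own: train a toy regressor, then compare its predictions on increasingly out-of-distribution inputs against the constant predictor that minimizes the training loss (for squared error, the mean training label). Whether and how quickly the gap shrinks will depend on the model and data; the snippet only sets up the measurement.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
x, y = torch.randn(512, 8), torch.randn(512, 1)
for _ in range(200):                      # fit on in-distribution data
    loss = ((net(x) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

optimal_constant = y.mean()               # loss-minimizing constant prediction
for scale in (1, 4, 16):                  # push inputs further OOD
    with torch.no_grad():
        preds = net(scale * torch.randn(512, 8))
    print(scale, (preds - optimal_constant).abs().mean().item())
```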

Robotic Offline RL from Internet Videos via Value-Function Pre-Training

no code implementations 22 Sep 2023 Chethan Bhateja, Derek Guo, Dibya Ghosh, Anikait Singh, Manan Tomar, Quan Vuong, Yevgen Chebotar, Sergey Levine, Aviral Kumar

Our system, called V-PTR, combines the benefits of pre-training on video data with robotic offline RL approaches that train on diverse robot data, resulting in value functions and policies for manipulation tasks that perform better, act robustly, and generalize broadly.

Offline RL Reinforcement Learning (RL)

Bootstrapping Adaptive Human-Machine Interfaces with Offline Reinforcement Learning

no code implementations 7 Sep 2023 Jensen Gao, Siddharth Reddy, Glen Berseth, Anca D. Dragan, Sergey Levine

We further evaluate on a simulated Sawyer pushing task with eye gaze control, and the Lunar Lander game with simulated user commands, and find that our method improves over baseline interfaces in these domains as well.

Brain Computer Interface Decision Making +1

REBOOT: Reuse Data for Bootstrapping Efficient Real-World Dexterous Manipulation

no code implementations 6 Sep 2023 Zheyuan Hu, Aaron Rovinsky, Jianlan Luo, Vikash Kumar, Abhishek Gupta, Sergey Levine

We demonstrate the benefits of reusing past data as replay buffer initialization for new tasks, for instance, the fast acquisition of intricate manipulation skills in the real world on a four-fingered robotic hand.

Imitation Learning Reinforcement Learning (RL)

A Connection between One-Step Regularization and Critic Regularization in Reinforcement Learning

1 code implementation 24 Jul 2023 Benjamin Eysenbach, Matthieu Geist, Sergey Levine, Ruslan Salakhutdinov

One-step methods perform regularization by doing just a single step of policy improvement, while critic regularization methods do many steps of policy improvement with a regularized objective.

Offline RL reinforcement-learning
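
The distinction the abstract draws can be made concrete in a toy tabular form: a one-step method fits a critic to the behavior policy and then performs exactly one (regularized) policy-improvement step against it. The sketch below shows that single advantage-weighted step; it illustrates the general idea, not the paper's construction.

```python
import torch

# Toy tabular setup: q_beta stands in for a critic fit to the behavior
# policy's returns (e.g. by SARSA); one-step methods stop after a single
# policy-improvement step against it.
num_states, num_actions = 5, 3
q_beta = torch.randn(num_states, num_actions)     # critic of behavior policy
behavior = torch.softmax(torch.randn(num_states, num_actions), dim=-1)

# One step of regularized policy improvement: exponentiate the advantage
# and reweight the behavior policy, then stop (no further critic updates).
v_beta = (behavior * q_beta).sum(-1, keepdim=True)
one_step_pi = behavior * torch.exp(q_beta - v_beta)
one_step_pi = one_step_pi / one_step_pi.sum(-1, keepdim=True)

# Critic-regularization methods instead iterate improvement steps while
# penalizing the critic on out-of-distribution actions (CQL-style); the
# paper relates the two regimes under certain assumptions.
```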

Contrastive Example-Based Control

1 code implementation 24 Jul 2023 Kyle Hatch, Benjamin Eysenbach, Rafael Rafailov, Tianhe Yu, Ruslan Salakhutdinov, Sergey Levine, Chelsea Finn

In this paper, we propose a method for offline, example-based control that learns an implicit model of multi-step transitions, rather than a reward function.

Offline RL

HIQL: Offline Goal-Conditioned RL with Latent States as Actions

1 code implementation NeurIPS 2023 Seohong Park, Dibya Ghosh, Benjamin Eysenbach, Sergey Levine

This structure can be very useful, as assessing the quality of actions for nearby goals is typically easier than for more distant goals.

Reinforcement Learning (RL) Unsupervised Pre-training

Multi-Stage Cable Routing through Hierarchical Imitation Learning

no code implementations 18 Jul 2023 Jianlan Luo, Charles Xu, Xinyang Geng, Gilbert Feng, Kuan Fang, Liam Tan, Stefan Schaal, Sergey Levine

In such settings, learning individual primitives for each stage that succeed with a high enough rate to perform a complete temporally extended task is impractical: if each stage must be completed successfully and has a non-negligible probability of failure, the likelihood of successful completion of the entire task becomes negligible.

Imitation Learning

Goal Representations for Instruction Following: A Semi-Supervised Language Interface to Control

no code implementations 30 Jun 2023 Vivek Myers, Andre He, Kuan Fang, Homer Walke, Philippe Hansen-Estruch, Ching-An Cheng, Mihai Jalobeanu, Andrey Kolobov, Anca Dragan, Sergey Levine

Our method achieves robust performance in the real world by learning an embedding from the labeled data that aligns language not to the goal image, but rather to the desired change between the start and goal images that the instruction corresponds to.

Instruction Following

ViNT: A Foundation Model for Visual Navigation

no code implementations 26 Jun 2023 Dhruv Shah, Ajay Sridhar, Nitish Dashora, Kyle Stachowicz, Kevin Black, Noriaki Hirose, Sergey Levine

In this paper, we describe the Visual Navigation Transformer (ViNT), a foundation model that aims to bring the success of general-purpose pre-trained models to vision-based robotic navigation.

Visual Navigation

Confidence-Based Model Selection: When to Take Shortcuts for Subpopulation Shifts

no code implementations 19 Jun 2023 Annie S. Chen, Yoonho Lee, Amrith Setlur, Sergey Levine, Chelsea Finn

Effective machine learning models learn both robust features that directly determine the outcome of interest (e.g., an object with wheels is more likely to be a car), and shortcut features (e.g., an object on a road is more likely to be a car).

Model Selection

Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data

1 code implementation 6 Jun 2023 Chongyi Zheng, Benjamin Eysenbach, Homer Walke, Patrick Yin, Kuan Fang, Ruslan Salakhutdinov, Sergey Levine

Robotic systems that rely primarily on self-supervised learning have the potential to decrease the amount of human annotation and engineering effort required to learn control strategies.

Contrastive Learning Data Augmentation +2

SACSoN: Scalable Autonomous Control for Social Navigation

no code implementations 2 Jun 2023 Noriaki Hirose, Dhruv Shah, Ajay Sridhar, Sergey Levine

By minimizing this counterfactual perturbation, we can induce robots to behave in ways that do not alter the natural behavior of humans in the shared space.

Continual Learning counterfactual +3

The False Promise of Imitating Proprietary LLMs

2 code implementations 25 May 2023 Arnav Gudibande, Eric Wallace, Charlie Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine, Dawn Song

This approach looks to cheaply imitate the proprietary model's capabilities using a weaker open-source model.

Language Modelling

Training Diffusion Models with Reinforcement Learning

1 code implementation 22 May 2023 Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, Sergey Levine

However, most use cases of diffusion models are not concerned with likelihoods, but instead with downstream objectives such as human-perceived image quality or drug effectiveness.

Decision Making Denoising +2
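
The general recipe, treating the denoising chain as an episode and reinforcing chains whose final samples score well under a reward, can be sketched with a REINFORCE-style update. Everything below (the tiny Gaussian "denoiser", the quadratic reward) is a stand-in for illustration; the paper's DDPO algorithm is more involved.

```python
import torch

class TinyDenoiser(torch.nn.Module):
    """Stand-in for a diffusion model's per-step Gaussian policy."""
    def __init__(self, dim=4):
        super().__init__()
        self.net = torch.nn.Linear(dim, dim)

    def step(self, x, t):
        mean = self.net(x)
        dist = torch.distributions.Normal(mean, 1.0)
        x_next = dist.sample()
        return x_next, dist.log_prob(x_next).sum(-1)

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
reward_fn = lambda x: -(x ** 2).sum(-1)   # stand-in downstream reward

x = torch.randn(32, 4)
log_probs = []
for t in range(10):                        # denoising chain as an episode
    x, lp = model.step(x, t)
    log_probs.append(lp)
reward = reward_fn(x).detach()
# REINFORCE: raise the log-likelihood of chains that earned high reward
loss = -(torch.stack(log_probs).sum(0) * reward).mean()
opt.zero_grad(); loss.backward(); opt.step()
```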

Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware

no code implementations 23 Apr 2023 Tony Z. Zhao, Vikash Kumar, Sergey Levine, Chelsea Finn

Fine manipulation tasks, such as threading cable ties or slotting a battery, are notoriously difficult for robots because they require precision, careful coordination of contact forces, and closed-loop visual feedback.

Chunking Imitation Learning

IDQL: Implicit Q-Learning as an Actor-Critic Method with Diffusion Policies

1 code implementation 20 Apr 2023 Philippe Hansen-Estruch, Ilya Kostrikov, Michael Janner, Jakub Grudzien Kuba, Sergey Levine

In this paper, we reinterpret IQL as an actor-critic method by generalizing the critic objective and connecting it to a behavior-regularized implicit actor.

Offline RL Q-Learning

Efficient Deep Reinforcement Learning Requires Regulating Overfitting

no code implementations 20 Apr 2023 Qiyang Li, Aviral Kumar, Ilya Kostrikov, Sergey Levine

Deep reinforcement learning algorithms that learn policies by trial-and-error must learn from limited amounts of data collected by actively interacting with the environment.

Model Selection reinforcement-learning

FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing

no code implementations 19 Apr 2023 Kyle Stachowicz, Dhruv Shah, Arjun Bhorkar, Ilya Kostrikov, Sergey Levine

We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).

Reinforcement Learning (RL)

Learning and Adapting Agile Locomotion Skills by Transferring Experience

no code implementations 19 Apr 2023 Laura Smith, J. Chase Kew, Tianyu Li, Linda Luu, Xue Bin Peng, Sehoon Ha, Jie Tan, Sergey Levine

Legged robots have enormous potential in their range of capabilities, from navigating unstructured terrains to high-speed running.

Reinforcement Learning (RL)

Reinforcement Learning from Passive Data via Latent Intentions

1 code implementation 10 Apr 2023 Dibya Ghosh, Chethan Bhateja, Sergey Levine

Passive observational data, such as human videos, is abundant and rich in information, yet remains largely untapped by current RL methods.

reinforcement-learning Value prediction

Neural Constraint Satisfaction: Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement

no code implementations 20 Mar 2023 Michael Chang, Alyssa L. Dayan, Franziska Meier, Thomas L. Griffiths, Sergey Levine, Amy Zhang

Object rearrangement is a challenge for embodied agents because solving these tasks requires generalizing across a combinatorially large set of configurations of entities and their locations.

Ignorance is Bliss: Robust Control via Information Gating

no code implementations NeurIPS 2023 Manan Tomar, Riashat Islam, Matthew E. Taylor, Sergey Levine, Philip Bachman

We propose "information gating" as a way to learn parsimonious representations that identify the minimal information required for a task.

Inductive Bias Q-Learning

Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning

2 code implementations NeurIPS 2023 Mitsuhiko Nakamoto, Yuexiang Zhai, Anikait Singh, Max Sobol Mark, Yi Ma, Chelsea Finn, Aviral Kumar, Sergey Levine

Our approach, calibrated Q-learning (Cal-QL), accomplishes this by learning a conservative value function initialization that underestimates the value of the learned policy from offline data, while also being calibrated, in the sense that the learned Q-values are at a reasonable scale.

Offline RL Q-Learning +1
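
One way to read the calibration idea is as a clamp inside a CQL-style conservative penalty: stop pushing Q-values down once they fall to a reference value estimate. The sketch below assumes tensors `q_pi` (Q on policy actions), `q_data` (Q on dataset actions), and `v_ref` (e.g., Monte Carlo returns of a reference policy); it is a schematic reading, not the authors' exact objective.

```python
import torch

def calibrated_conservative_penalty(q_pi, q_data, v_ref, alpha=1.0):
    """CQL-style regularizers push Q down on policy actions and up on
    dataset actions; a Cal-QL-style calibration stops pushing once Q
    drops to a reference value, via max(Q, V_ref) inside the penalty."""
    pushed_down = torch.maximum(q_pi, v_ref)   # no penalty below V_ref
    return alpha * (pushed_down.mean() - q_data.mean())

q_pi = torch.randn(256, requires_grad=True)    # Q(s, a ~ pi)
q_data = torch.randn(256)                      # Q(s, a) on dataset actions
v_ref = torch.zeros(256)                       # reference value estimates
penalty = calibrated_conservative_penalty(q_pi, q_data, v_ref)
penalty.backward()
```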

Grounded Decoding: Guiding Text Generation with Grounded Models for Embodied Agents

no code implementations NeurIPS 2023 Wenlong Huang, Fei Xia, Dhruv Shah, Danny Driess, Andy Zeng, Yao Lu, Pete Florence, Igor Mordatch, Sergey Levine, Karol Hausman, Brian Ichter

Recent progress in large language models (LLMs) has demonstrated the ability to learn and leverage Internet-scale knowledge through pre-training with autoregressive models.

Language Modelling Text Generation

Robust and Versatile Bipedal Jumping Control through Reinforcement Learning

no code implementations 19 Feb 2023 Zhongyu Li, Xue Bin Peng, Pieter Abbeel, Sergey Levine, Glen Berseth, Koushil Sreenath

This work aims to push the limits of agility for bipedal robots by enabling a torque-controlled bipedal robot to perform robust and versatile dynamic jumps in the real world.

reinforcement-learning Reinforcement Learning (RL)

Project and Probe: Sample-Efficient Domain Adaptation by Interpolating Orthogonal Features

no code implementations 10 Feb 2023 Annie S. Chen, Yoonho Lee, Amrith Setlur, Sergey Levine, Chelsea Finn

Transfer learning with a small amount of target data is an effective and common approach to adapting a pre-trained model to distribution shifts.

Domain Adaptation Transfer Learning

Predictable MDP Abstraction for Unsupervised Model-Based RL

2 code implementations 8 Feb 2023 Seohong Park, Sergey Levine

A key component of model-based reinforcement learning (RL) is a dynamics model that predicts the outcomes of actions.

Model-based Reinforcement Learning Reinforcement Learning (RL)

Bitrate-Constrained DRO: Beyond Worst Case Robustness To Unknown Group Shifts

1 code implementation 6 Feb 2023 Amrith Setlur, Don Dennis, Benjamin Eysenbach, Aditi Raghunathan, Chelsea Finn, Virginia Smith, Sergey Levine

Some robust training algorithms (e.g., Group DRO) specialize to group shifts and require group information on all training points.

Understanding the Complexity Gains of Single-Task RL with a Curriculum

no code implementations 24 Dec 2022 Qiyang Li, Yuexiang Zhai, Yi Ma, Sergey Levine

Under mild regularity conditions on the curriculum, we show that sequentially solving each task in the multi-task RL problem is more computationally efficient than solving the original single-task problem, without any explicit exploration bonuses or other exploration strategies.

Reinforcement Learning (RL)

Imitation Is Not Enough: Robustifying Imitation with Reinforcement Learning for Challenging Driving Scenarios

no code implementations 21 Dec 2022 Yiren Lu, Justin Fu, George Tucker, Xinlei Pan, Eli Bronstein, Rebecca Roelofs, Benjamin Sapp, Brandyn White, Aleksandra Faust, Shimon Whiteson, Dragomir Anguelov, Sergey Levine

To our knowledge, this is the first application of a combined imitation and reinforcement learning approach in autonomous driving that utilizes large amounts of real-world human driving data.

Autonomous Driving Imitation Learning +2

Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance

no code implementations 19 Dec 2022 Kelvin Xu, Zheyuan Hu, Ria Doshi, Aaron Rovinsky, Vikash Kumar, Abhishek Gupta, Sergey Levine

In this paper, we describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks and enable robots with complex multi-fingered hands to learn to perform them through interaction.

reinforcement-learning Reinforcement Learning (RL)

Offline Reinforcement Learning for Visual Navigation

1 code implementation 16 Dec 2022 Dhruv Shah, Arjun Bhorkar, Hrish Leen, Ilya Kostrikov, Nick Rhinehart, Sergey Levine

Reinforcement learning can enable robots to navigate to distant goals while optimizing user-specified reward functions, including preferences for following lanes, staying on paved paths, or avoiding freshly mowed grass.

Navigate Offline RL +3

Learning Robotic Navigation from Experience: Principles, Methods, and Recent Results

no code implementations 13 Dec 2022 Sergey Levine, Dhruv Shah

Navigation is one of the most heavily studied problems in robotics, and is conventionally approached as a geometric mapping and planning problem.

Confidence-Conditioned Value Functions for Offline Reinforcement Learning

no code implementations 8 Dec 2022 Joey Hong, Aviral Kumar, Sergey Levine

This approach can be implemented in practice by conditioning the Q-function from existing conservative algorithms on the confidence. We theoretically show that our learned value functions produce conservative estimates of the true value at any desired confidence.

Offline RL reinforcement-learning +1

Multi-Task Imitation Learning for Linear Dynamical Systems

no code implementations 1 Dec 2022 Thomas T. Zhang, Katie Kang, Bruce D. Lee, Claire Tomlin, Sergey Levine, Stephen Tu, Nikolai Matni

In particular, we consider a setting where learning is split into two phases: (a) a pre-training step where a shared $k$-dimensional representation is learned from $H$ source policies, and (b) a target policy fine-tuning step where the learned representation is used to parameterize the policy class.

Imitation Learning Representation Learning

Offline Q-Learning on Diverse Multi-Task Data Both Scales And Generalizes

no code implementations 28 Nov 2022 Aviral Kumar, Rishabh Agarwal, Xinyang Geng, George Tucker, Sergey Levine

The potential of offline reinforcement learning (RL) is that high-capacity models trained on large, heterogeneous datasets can lead to agents that generalize broadly, analogously to similar advances in vision and NLP.

Offline RL Q-Learning +2

Robotic Skill Acquisition via Instruction Augmentation with Vision-Language Models

no code implementations 21 Nov 2022 Ted Xiao, Harris Chan, Pierre Sermanet, Ayzaan Wahid, Anthony Brohan, Karol Hausman, Sergey Levine, Jonathan Tompson

To accomplish this, we introduce Data-driven Instruction Augmentation for Language-conditioned control (DIAL): we utilize semi-supervised language labels leveraging the semantic understanding of CLIP to propagate knowledge onto large datasets of unlabelled demonstration data and then train language-conditioned policies on the augmented datasets.

Imitation Learning

Data-Driven Offline Decision-Making via Invariant Representation Learning

no code implementations 21 Nov 2022 Han Qi, Yi Su, Aviral Kumar, Sergey Levine

The goal in offline data-driven decision-making is to synthesize decisions that optimize a black-box utility function, using a previously collected static dataset, with no active interaction.

Decision Making Domain Adaptation +2

Offline RL With Realistic Datasets: Heteroskedasticity and Support Constraints

no code implementations 2 Nov 2022 Anikait Singh, Aviral Kumar, Quan Vuong, Yevgen Chebotar, Sergey Levine

Both theoretically and empirically, we show that typical offline RL methods, which are based on distribution constraints, fail to learn from data with such non-uniform variability, due to the requirement to stay close to the behavior policy to the same extent across the state space.

Atari Games Offline RL +2

Dual Generator Offline Reinforcement Learning

no code implementations 2 Nov 2022 Quan Vuong, Aviral Kumar, Sergey Levine, Yevgen Chebotar

In this paper, we show that the issue of conflicting objectives can be resolved by training two generators: one that maximizes return, with the other capturing the "remainder" of the data distribution in the offline dataset, such that the mixture of the two is close to the behavior policy.

Offline RL reinforcement-learning +1

Adversarial Policies Beat Superhuman Go AIs

2 code implementations 1 Nov 2022 Tony T. Wang, Adam Gleave, Tom Tseng, Kellin Pelrine, Nora Belrose, Joseph Miller, Michael D. Dennis, Yawen Duan, Viktor Pogrebniak, Sergey Levine, Stuart Russell

The core vulnerability uncovered by our attack persists even in KataGo agents adversarially trained to defend against our attack.

Learning on the Job: Self-Rewarding Offline-to-Online Finetuning for Industrial Insertion of Novel Connectors from Vision

no code implementations 27 Oct 2022 Ashvin Nair, Brian Zhu, Gokul Narayanan, Eugen Solowjow, Sergey Levine

One of the main observations we make in this work is that, with a suitable representation learning and domain generalization approach, it can be significantly easier for the reward function to generalize to a new but structurally similar task (e.g., inserting a new type of connector) than for the policy.

Domain Generalization Representation Learning

Towards Better Few-Shot and Finetuning Performance with Forgetful Causal Language Models

no code implementations 24 Oct 2022 Hao Liu, Xinyang Geng, Lisa Lee, Igor Mordatch, Sergey Levine, Sharan Narang, Pieter Abbeel

Large language models (LLMs) trained using the next-token-prediction objective, such as GPT-3 and PaLM, have revolutionized natural language processing in recent years by showing impressive zero-shot and few-shot capabilities across a wide range of tasks.

Language Modelling Natural Language Inference +1

Unpacking Reward Shaping: Understanding the Benefits of Reward Engineering on Sample Complexity

no code implementations 18 Oct 2022 Abhishek Gupta, Aldo Pacchiano, Yuexiang Zhai, Sham M. Kakade, Sergey Levine

Reinforcement learning provides an automated framework for learning behaviors from high-level reward specifications, but in practice the choice of reward function can be crucial for good results -- while in principle the reward only needs to specify what the task is, in reality practitioners often need to design more detailed rewards that provide the agent with some hints about how the task should be completed.

reinforcement-learning Reinforcement Learning (RL)

You Only Live Once: Single-Life Reinforcement Learning

no code implementations 17 Oct 2022 Annie S. Chen, Archit Sharma, Sergey Levine, Chelsea Finn

We formalize this problem setting, which we call single-life reinforcement learning (SLRL), where an agent must complete a task within a single episode without interventions, utilizing its prior experience while contending with some form of novelty.

Continuous Control reinforcement-learning +1

Generalization with Lossy Affordances: Leveraging Broad Offline Data for Learning Visuomotor Tasks

no code implementations 12 Oct 2022 Kuan Fang, Patrick Yin, Ashvin Nair, Homer Walke, Gengchen Yan, Sergey Levine

The utilization of broad datasets has proven crucial for generalization across a wide range of fields.

Pre-Training for Robots: Offline RL Enables Learning New Tasks from a Handful of Trials

1 code implementation 11 Oct 2022 Aviral Kumar, Anikait Singh, Frederik Ebert, Mitsuhiko Nakamoto, Yanlai Yang, Chelsea Finn, Sergey Levine

To our knowledge, PTR is the first RL method that succeeds at learning new tasks in a new domain on a real WidowX robot with as few as 10 task demonstrations, by effectively leveraging an existing dataset of diverse multi-task robot data collected in a variety of toy kitchens.

Offline RL Q-Learning +1

GNM: A General Navigation Model to Drive Any Robot

1 code implementation 7 Oct 2022 Dhruv Shah, Ajay Sridhar, Arjun Bhorkar, Noriaki Hirose, Sergey Levine

Learning provides a powerful tool for vision-based navigation, but the capabilities of learning-based policies are constrained by limited training data.

Distributionally Adaptive Meta Reinforcement Learning

no code implementations 6 Oct 2022 Anurag Ajay, Abhishek Gupta, Dibya Ghosh, Sergey Levine, Pulkit Agrawal

In this work, we develop a framework for meta-RL algorithms that are able to behave appropriately under test-time distribution shifts in the space of tasks.

Meta Reinforcement Learning reinforcement-learning +1

Simplifying Model-based RL: Learning Representations, Latent-space Models, and Policies with One Objective

no code implementations 18 Sep 2022 Raj Ghugare, Homanga Bharadhwaj, Benjamin Eysenbach, Sergey Levine, Ruslan Salakhutdinov

In this work, we propose a single objective which jointly optimizes a latent-space model and policy to achieve high returns while remaining self-consistent.

Reinforcement Learning (RL) Value prediction

GenLoco: Generalized Locomotion Controllers for Quadrupedal Robots

1 code implementation 12 Sep 2022 Gilbert Feng, Hongbo Zhang, Zhongyu Li, Xue Bin Peng, Bhuvan Basireddy, Linzhu Yue, Zhitao Song, Lizhi Yang, Yunhui Liu, Koushil Sreenath, Sergey Levine

In this work, we introduce a framework for training generalized locomotion (GenLoco) controllers for quadrupedal robots.

A Walk in the Park: Learning to Walk in 20 Minutes With Model-Free Reinforcement Learning

1 code implementation 16 Aug 2022 Laura Smith, Ilya Kostrikov, Sergey Levine

Deep reinforcement learning is a promising approach to learning policies in uncontrolled environments that do not require domain knowledge.

reinforcement-learning Reinforcement Learning (RL)

Basis for Intentions: Efficient Inverse Reinforcement Learning using Past Experience

1 code implementation 9 Aug 2022 Marwa Abdulhai, Natasha Jaques, Sergey Levine

IRL can provide a generalizable and compact representation for apprenticeship learning, and enable accurately inferring the preferences of a human in order to assist them.

reinforcement-learning Reinforcement Learning (RL)

Hierarchical Reinforcement Learning for Precise Soccer Shooting Skills using a Quadrupedal Robot

no code implementations 1 Aug 2022 Yandong Ji, Zhongyu Li, Yinan Sun, Xue Bin Peng, Sergey Levine, Glen Berseth, Koushil Sreenath

Developing algorithms to enable a legged robot to shoot a soccer ball to a given target is a challenging problem that combines robot motion control and planning into one task.

Friction Hierarchical Reinforcement Learning +3

LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action

1 code implementation 10 Jul 2022 Dhruv Shah, Blazej Osinski, Brian Ichter, Sergey Levine

Goal-conditioned policies for robotic navigation can be trained on large, unannotated datasets, providing for good generalization to real-world settings.

Instruction Following Language Modelling

Offline RL Policies Should be Trained to be Adaptive

no code implementations 5 Jul 2022 Dibya Ghosh, Anurag Ajay, Pulkit Agrawal, Sergey Levine

Offline RL algorithms must account for the fact that the dataset they are provided may leave many facets of the environment unknown.

Offline RL

Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation

1 code implementation 2 Jul 2022 Michael Chang, Thomas L. Griffiths, Sergey Levine

Iterative refinement -- start with a random guess, then iteratively improve the guess -- is a useful paradigm for representation learning because it offers a way to break symmetries among equally plausible explanations for the data.

Representation Learning
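
Training iterative refinement with implicit differentiation can be sketched with the "one-step gradient" trick: run the refinement to an approximate fixed point without tracking gradients, then differentiate through a single final application of the update. The contractive update `f` below is a stand-in chosen only so the iteration settles; it is not the paper's refinement module.

```python
import torch

def fixed_point(f, x, z0, iters=50):
    """Iterate z <- f(z, x) to (approximate) convergence without
    tracking gradients, then take one differentiable step -- the
    one-step implicit-gradient trick for iterative refinement."""
    z = z0
    with torch.no_grad():
        for _ in range(iters):
            z = f(z, x)
    return f(z.detach(), x)   # gradient flows through the last step only

lin = torch.nn.Linear(4, 4)
f = lambda z, x: 0.5 * torch.tanh(lin(z)) + x   # a (roughly) contractive update
x = torch.randn(8, 4, requires_grad=True)
z_star = fixed_point(f, x, torch.zeros(8, 4))
z_star.sum().backward()   # gradients reach x and lin via one step
```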

Lyapunov Density Models: Constraining Distribution Shift in Learning-Based Control

no code implementations 21 Jun 2022 Katie Kang, Paula Gradu, Jason Choi, Michael Janner, Claire Tomlin, Sergey Levine

Learned models and policies can generalize effectively when evaluated within the distribution of the training data, but can produce unpredictable and erroneous outputs on out-of-distribution inputs.

Density Estimation

Contrastive Learning as Goal-Conditioned Reinforcement Learning

no code implementations 15 Jun 2022 Benjamin Eysenbach, Tianjun Zhang, Ruslan Salakhutdinov, Sergey Levine

While deep RL should automatically acquire such good representations, prior work often finds that learning representations in an end-to-end fashion is unstable and instead equips RL algorithms with additional representation learning parts (e.g., auxiliary losses, data augmentation).

Contrastive Learning Data Augmentation +4
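
The basic construction, a critic trained contrastively to tell which future state each (state, action) pair actually led to, is a standard InfoNCE setup. A minimal sketch with stand-in linear encoders and random data; shapes are invented for illustration.

```python
import torch
import torch.nn.functional as F

# Sketch of a contrastive goal-conditioned critic: embed (state, action)
# pairs and future states, and classify which future in the batch was
# actually reached from each pair (InfoNCE over the batch).
sa_enc = torch.nn.Linear(12, 32)     # stand-in phi(s, a)
g_enc = torch.nn.Linear(6, 32)       # stand-in psi(s_future)

sa = torch.randn(64, 12)             # concatenated states and actions
future = torch.randn(64, 6)          # futures from the same trajectories
logits = sa_enc(sa) @ g_enc(future).T   # critic f(s, a, g) = <phi, psi>
labels = torch.arange(64)                # positives sit on the diagonal
loss = F.cross_entropy(logits, labels)   # other rows act as negatives
loss.backward()
```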

Imitating Past Successes can be Very Suboptimal

no code implementations 7 Jun 2022 Benjamin Eysenbach, Soumith Udatha, Sergey Levine, Ruslan Salakhutdinov

Prior work has proposed a simple strategy for reinforcement learning (RL): label experience with the outcomes achieved in that experience, and then imitate the relabeled experience.

Imitation Learning Reinforcement Learning (RL)

Adversarial Unlearning: Reducing Confidence Along Adversarial Directions

no code implementations 3 Jun 2022 Amrith Setlur, Benjamin Eysenbach, Virginia Smith, Sergey Levine

Supervised learning methods trained with maximum likelihood objectives often overfit on training data.

Data Augmentation

Multimodal Masked Autoencoders Learn Transferable Representations

1 code implementation 27 May 2022 Xinyang Geng, Hao Liu, Lisa Lee, Dale Schuurmans, Sergey Levine, Pieter Abbeel

We provide an empirical study of M3AE trained on a large-scale image-text dataset, and find that M3AE is able to learn generalizable representations that transfer well to downstream tasks.

Contrastive Learning

First Contact: Unsupervised Human-Machine Co-Adaptation via Mutual Information Maximization

1 code implementation 24 May 2022 Siddharth Reddy, Sergey Levine, Anca D. Dragan

How can we train an assistive human-machine interface (e.g., an electromyography-based limb prosthesis) to translate a user's raw command signals into the actions of a robot or computer when there is no prior mapping, we cannot ask the user for supervision in the form of action labels or reward feedback, and we do not have prior knowledge of the tasks the user is trying to accomplish?

Planning with Diffusion for Flexible Behavior Synthesis

2 code implementations 20 May 2022 Michael Janner, Yilun Du, Joshua B. Tenenbaum, Sergey Levine

Model-based reinforcement learning methods often use learning only for the purpose of estimating an approximate dynamics model, offloading the rest of the decision-making work to classical trajectory optimizers.

Decision Making Denoising +2
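
Planning as sampling can be sketched as guided denoising: alternate the learned denoising update with a gradient step on a predicted-return model, so high-value trajectories become more likely. The `denoise` and `value` callables below are stand-ins, not the paper's trained models.

```python
import torch

def guided_plan(denoise, value, traj, steps=20, scale=0.1):
    """Sketch of value-guided trajectory denoising: alternate the
    learned denoising update with a gradient ascent step on a
    predicted-return model, so planning becomes guided sampling."""
    for t in reversed(range(steps)):
        traj = denoise(traj, t)
        traj = traj.detach().requires_grad_(True)
        ret = value(traj).sum()                # predicted return of the plan
        (grad,) = torch.autograd.grad(ret, traj)
        traj = (traj + scale * grad).detach()  # nudge toward higher value
    return traj

# Stand-ins: a "denoiser" that shrinks noise and a quadratic value model.
denoise = lambda x, t: 0.9 * x
value = lambda x: -((x - 1.0) ** 2).mean(dim=(1, 2))
plan = guided_plan(denoise, value, torch.randn(4, 16, 3))  # (batch, horizon, dim)
```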

Planning to Practice: Efficient Online Fine-Tuning by Composing Goals in Latent Space

no code implementations 17 May 2022 Kuan Fang, Patrick Yin, Ashvin Nair, Sergey Levine

Our experimental results show that PTP can generate feasible sequences of subgoals that enable the policy to efficiently solve the target tasks.

reinforcement-learning Reinforcement Learning (RL)

ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters

no code implementations 4 May 2022 Xue Bin Peng, Yunrong Guo, Lina Halper, Sergey Levine, Sanja Fidler

By leveraging a massively parallel GPU-based simulator, we are able to train skill embeddings using over a decade of simulated experiences, enabling our model to learn a rich and versatile repertoire of skills.

Imitation Learning Unsupervised Reinforcement Learning

Control-Aware Prediction Objectives for Autonomous Driving

no code implementations 28 Apr 2022 Rowan Mcallister, Blake Wulfe, Jean Mercat, Logan Ellis, Sergey Levine, Adrien Gaidon

Autonomous vehicle software is typically structured as a modular pipeline of individual components (e.g., perception, prediction, and planning) to help separate concerns into interpretable sub-tasks.

Autonomous Driving Trajectory Prediction

Bisimulation Makes Analogies in Goal-Conditioned Reinforcement Learning

no code implementations 27 Apr 2022 Philippe Hansen-Estruch, Amy Zhang, Ashvin Nair, Patrick Yin, Sergey Levine

We learn this representation using a metric form of this abstraction, and show its ability to generalize to new goals in simulation manipulation tasks.

reinforcement-learning Reinforcement Learning (RL)

CHAI: A CHatbot AI for Task-Oriented Dialogue with Offline Reinforcement Learning

2 code implementations NAACL 2022 Siddharth Verma, Justin Fu, Mengjiao Yang, Sergey Levine

Conventionally, generation of natural language for dialogue agents may be viewed as a statistical learning problem: determine the patterns in human-provided data and generate appropriate responses with similar statistical properties.

Chatbot Offline RL +2

INFOrmation Prioritization through EmPOWERment in Visual Model-Based RL

no code implementations ICLR 2022 Homanga Bharadhwaj, Mohammad Babaeizadeh, Dumitru Erhan, Sergey Levine

We propose a modified objective for model-based RL that, in combination with mutual information maximization, allows us to learn representations and dynamics for visual model-based RL without reconstruction in a way that explicitly prioritizes functionally relevant factors.

Model-based Reinforcement Learning Reinforcement Learning (RL)

When Should We Prefer Offline Reinforcement Learning Over Behavioral Cloning?

no code implementations 12 Apr 2022 Aviral Kumar, Joey Hong, Anikait Singh, Sergey Levine

To answer this question, we characterize the properties of environments that allow offline RL methods to perform better than BC methods, even when only provided with expert data.

Atari Games Imitation Learning +3

Jump-Start Reinforcement Learning

no code implementations 5 Apr 2022 Ikechukwu Uchendu, Ted Xiao, Yao Lu, Banghua Zhu, Mengyuan Yan, Joséphine Simon, Matthew Bennice, Chuyuan Fu, Cong Ma, Jiantao Jiao, Sergey Levine, Karol Hausman

In addition, we provide an upper bound on the sample complexity of JSRL and show that with the help of a guide-policy, one can improve the sample complexity for non-optimism exploration methods from exponential in horizon to polynomial.

reinforcement-learning Reinforcement Learning (RL)

ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints

no code implementations 23 Feb 2022 Dhruv Shah, Sergey Levine

In this work, we propose an approach that integrates learning and planning, and can utilize side information such as schematic roadmaps, satellite maps and GPS coordinates as a planning heuristic, without relying on them being accurate.

3D Reconstruction General Knowledge +1

Design-Bench: Benchmarks for Data-Driven Offline Model-Based Optimization

3 code implementations 17 Feb 2022 Brandon Trabucco, Xinyang Geng, Aviral Kumar, Sergey Levine

To address this, we present Design-Bench, a benchmark for offline MBO with a unified evaluation protocol and reference implementations of recent methods.

ASHA: Assistive Teleoperation via Human-in-the-Loop Reinforcement Learning

no code implementations 5 Feb 2022 Sean Chen, Jensen Gao, Siddharth Reddy, Glen Berseth, Anca D. Dragan, Sergey Levine

Building assistive interfaces for controlling robots through arbitrary, high-dimensional, noisy inputs (e.g., webcam images of eye gaze) can be challenging, especially when it involves inferring the user's desired action in the absence of a natural 'default' interface.

reinforcement-learning Reinforcement Learning (RL)

BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning

no code implementations 4 Feb 2022 Eric Jang, Alex Irpan, Mohi Khansari, Daniel Kappler, Frederik Ebert, Corey Lynch, Sergey Levine, Chelsea Finn

In this paper, we study the problem of enabling a vision-based robotic manipulation system to generalize to novel tasks, a long-standing challenge in robot learning.

Imitation Learning

Fully Online Meta-Learning Without Task Boundaries

no code implementations 1 Feb 2022 Jathushan Rajasegaran, Chelsea Finn, Sergey Levine

In this paper, we study how meta-learning can be applied to tackle online problems of this nature, simultaneously adapting to changing tasks and input distributions and meta-training the model in order to adapt more quickly in the future.

Meta-Learning

RvS: What is Essential for Offline RL via Supervised Learning?

1 code implementation 20 Dec 2021 Scott Emmons, Benjamin Eysenbach, Ilya Kostrikov, Sergey Levine

Recent work has shown that supervised learning alone, without temporal difference (TD) learning, can be remarkably effective for offline RL.

Offline RL
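
The RvS recipe is, at its core, outcome-conditioned supervised learning: concatenate an outcome (a return-to-go or a goal) onto the state and fit the policy by maximum likelihood, with no TD backups. A minimal sketch on synthetic data, with all shapes and hyperparameters invented for illustration:

```python
import torch

# Outcome-conditioned supervised learning: the policy sees the state
# plus a scalar return-to-go (a goal state works the same way).
policy = torch.nn.Sequential(torch.nn.Linear(8 + 1, 64), torch.nn.ReLU(),
                             torch.nn.Linear(64, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

states = torch.randn(1024, 8)
actions = torch.randn(1024, 2)
returns_to_go = torch.randn(1024, 1)        # outcome achieved in the data

for _ in range(100):
    pred = policy(torch.cat([states, returns_to_go], dim=-1))
    loss = ((pred - actions) ** 2).mean()   # plain supervised loss, no TD
    opt.zero_grad(); loss.backward(); opt.step()

# At test time, condition on a *desired* outcome to steer behavior.
act = policy(torch.cat([torch.randn(1, 8), torch.tensor([[5.0]])], dim=-1))
```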

Autonomous Reinforcement Learning: Formalism and Benchmarking

2 code implementations ICLR 2022 Archit Sharma, Kelvin Xu, Nikhil Sardana, Abhishek Gupta, Karol Hausman, Sergey Levine, Chelsea Finn

In this paper, we aim to address this discrepancy by laying out a framework for Autonomous Reinforcement Learning (ARL): reinforcement learning where the agent not only learns through its own experience, but also contends with lack of human supervision to reset between trials.

Benchmarking reinforcement-learning +1

DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization

no code implementations ICLR 2022 Aviral Kumar, Rishabh Agarwal, Tengyu Ma, Aaron Courville, George Tucker, Sergey Levine

In this paper, we discuss how the implicit regularization effect of SGD seen in supervised learning could in fact be harmful in the offline deep RL setting, leading to poor generalization and degenerate feature representations.

Atari Games D4RL +3

Extending the WILDS Benchmark for Unsupervised Adaptation

1 code implementation ICLR 2022 Shiori Sagawa, Pang Wei Koh, Tony Lee, Irena Gao, Sang Michael Xie, Kendrick Shen, Ananya Kumar, Weihua Hu, Michihiro Yasunaga, Henrik Marklund, Sara Beery, Etienne David, Ian Stavness, Wei Guo, Jure Leskovec, Kate Saenko, Tatsunori Hashimoto, Sergey Levine, Chelsea Finn, Percy Liang

Unlabeled data can be a powerful point of leverage for mitigating these distribution shifts, as it is frequently much more available than labeled data and can often be obtained from distributions beyond the source distribution as well.

CoMPS: Continual Meta Policy Search

no code implementations ICLR 2022 Glen Berseth, Zhiwei Zhang, Grace Zhang, Chelsea Finn, Sergey Levine

Beyond simply transferring past experience to new tasks, our goal is to devise continual reinforcement learning algorithms that learn to learn, using their experience on previous tasks to learn new tasks more quickly.

Continual Learning Continuous Control +5

Information is Power: Intrinsic Control via Information Capture

no code implementations NeurIPS 2021 Nicholas Rhinehart, Jenny Wang, Glen Berseth, John D. Co-Reyes, Danijar Hafner, Chelsea Finn, Sergey Levine

We study this question in dynamic partially-observed environments, and argue that a compact and general learning objective is to minimize the entropy of the agent's state visitation estimated using a latent state-space model.

Bayesian Adaptation for Covariate Shift

no code implementations NeurIPS 2021 Aurick Zhou, Sergey Levine

When faced with distribution shift at test time, deep neural networks often make inaccurate predictions with unreliable uncertainty estimates. While improving the robustness of neural networks is one promising approach to mitigate this issue, an appealing alternative to robustifying networks against all possible test-time shifts is to instead directly adapt them to unlabeled inputs from the particular distribution shift we encounter at test time. However, this poses a challenging question: in the standard Bayesian model for supervised learning, unlabeled inputs are conditionally independent of model parameters when the labels are unobserved, so what can unlabeled data tell us about the model parameters at test time?

Domain Adaptation Image Classification

TRAIL: Near-Optimal Imitation Learning with Suboptimal Data

1 code implementation ICLR 2022 Mengjiao Yang, Sergey Levine, Ofir Nachum

In this work, we answer this question affirmatively and present training objectives that use offline datasets to learn a factored transition model whose structure enables the extraction of a latent action space.

Imitation Learning

Understanding the World Through Action

1 code implementation 24 Oct 2021 Sergey Levine

The recent history of machine learning research has taught us that machine learning methods can be most effective when they are provided with very large, high-capacity models, and trained on very large and diverse datasets.

reinforcement-learning Reinforcement Learning (RL)

C-Planning: An Automatic Curriculum for Learning Goal-Reaching Tasks

no code implementations ICLR 2022 Tianjun Zhang, Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine, Joseph E. Gonzalez

Goal-conditioned reinforcement learning (RL) can solve tasks in a wide range of domains, including navigation and manipulation, but learning to reach distant goals remains a central challenge to the field.

Reinforcement Learning (RL)

Data-Driven Offline Optimization For Architecting Hardware Accelerators

1 code implementation ICLR 2022 Aviral Kumar, Amir Yazdanbakhsh, Milad Hashemi, Kevin Swersky, Sergey Levine

An alternative paradigm is to use a "data-driven", offline approach that utilizes logged simulation data, to architect hardware accelerators, without needing any form of simulations.

Computer Architecture and Systems

MEMO: Test Time Robustness via Adaptation and Augmentation

2 code implementations 18 Oct 2021 Marvin Zhang, Sergey Levine, Chelsea Finn

We study the problem of test time robustification, i.e., using the test input to improve model robustness.

Test-time Adaptation
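
The method's core loop is compact enough to sketch: make several augmented copies of a single test input, average the model's predicted distributions over them, take a gradient step that minimizes the entropy of that marginal, and then predict. The classifier and augmentation below are stand-ins, not the paper's models.

```python
import torch
import torch.nn.functional as F

def memo_step(model, x, augment, n_aug=8, lr=1e-3):
    """Sketch of test-time adaptation via augmentation: minimize the
    entropy of the prediction averaged over augmented copies of a
    single test input, with one gradient step, then predict."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    views = torch.stack([augment(x) for _ in range(n_aug)])
    probs = F.softmax(model(views), dim=-1).mean(0)   # marginal prediction
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum()
    opt.zero_grad(); entropy.backward(); opt.step()
    return model(x.unsqueeze(0)).argmax(-1)

model = torch.nn.Linear(16, 10)                    # stand-in classifier
augment = lambda x: x + 0.1 * torch.randn_like(x)  # stand-in augmentation
pred = memo_step(model, torch.randn(16), augment)
```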

Offline Reinforcement Learning with Implicit Q-Learning

14 code implementations 12 Oct 2021 Ilya Kostrikov, Ashvin Nair, Sergey Levine

The main insight in our work is that, instead of evaluating unseen actions from the latest policy, we can approximate the policy improvement step implicitly by treating the state value function as a random variable, with randomness determined by the action (while still integrating over the dynamics to avoid excessive optimism), and then taking a state conditional upper expectile of this random variable to estimate the value of the best actions in that state.

D4RL Offline RL +3
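
The "upper expectile" mentioned here corresponds to an asymmetric squared loss: errors above the prediction are weighted by tau and errors below by 1 - tau, so for tau > 0.5 the fitted value leans toward the best in-support outcomes without ever querying unseen actions. A minimal sketch of that loss (tensor shapes invented for illustration):

```python
import torch

def expectile_loss(diff, tau=0.7):
    """Asymmetric squared loss |tau - 1(u < 0)| * u^2: for tau > 0.5
    the value network is pulled toward an *upper* expectile of the TD
    target, approximating the best in-support action value."""
    weight = torch.abs(tau - (diff < 0).float())
    return (weight * diff ** 2).mean()

q_target = torch.randn(256)               # e.g. r + gamma * V(s') on dataset actions
v_pred = torch.randn(256, requires_grad=True)
loss = expectile_loss(q_target - v_pred)  # V fits an upper expectile of Q
loss.backward()
```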

Mismatched No More: Joint Model-Policy Optimization for Model-Based RL

1 code implementation 6 Oct 2021 Benjamin Eysenbach, Alexander Khazatsky, Sergey Levine, Ruslan Salakhutdinov

Many model-based reinforcement learning (RL) methods follow a similar template: fit a model to previously observed data, and then use data from that model for RL or planning.

Model-based Reinforcement Learning Reinforcement Learning (RL)

The Information Geometry of Unsupervised Reinforcement Learning

1 code implementation ICLR 2022 Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine

In this work, we show that unsupervised skill discovery algorithms based on mutual information maximization do not learn skills that are optimal for every possible reward function.

Contrastive Learning reinforcement-learning +3

Test Time Robustification of Deep Models via Adaptation and Augmentation

no code implementations 29 Sep 2021 Marvin Mengxin Zhang, Sergey Levine, Chelsea Finn

We study the problem of test time robustification, i.e., using the test input to improve model robustness.

Test-time Adaptation

Data Sharing without Rewards in Multi-Task Offline Reinforcement Learning

no code implementations 29 Sep 2021 Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Chelsea Finn, Sergey Levine, Karol Hausman

However, these benefits come at a cost -- for data to be shared between tasks, each transition must be annotated with reward labels corresponding to other tasks.

Multi-Task Learning Offline RL +2

FitVid: High-Capacity Pixel-Level Video Prediction

no code implementations 29 Sep 2021 Mohammad Babaeizadeh, Mohammad Taghi Saffar, Suraj Nair, Sergey Levine, Chelsea Finn, Dumitru Erhan

Furthermore, such an agent can internally represent the complex dynamics of the real world and therefore can acquire a representation useful for a variety of visual perception tasks.

Image Augmentation Video Prediction +1

The Essential Elements of Offline RL via Supervised Learning

no code implementations ICLR 2022 Scott Emmons, Benjamin Eysenbach, Ilya Kostrikov, Sergey Levine

These methods, which we collectively refer to as reinforcement learning via supervised learning (RvS), involve a number of design decisions, such as policy architectures and how the conditioning variable is constructed.

Offline RL reinforcement-learning +1

Should I Run Offline Reinforcement Learning or Behavioral Cloning?

no code implementations ICLR 2022 Aviral Kumar, Joey Hong, Anikait Singh, Sergey Levine

In this paper, our goal is to characterize environments and dataset compositions where offline RL leads to better performance than BC.

Atari Games Offline RL +3

Offline Reinforcement Learning with In-sample Q-Learning

1 code implementation ICLR 2022 Ilya Kostrikov, Ashvin Nair, Sergey Levine

The main insight in our work is that, instead of evaluating unseen actions from the latest policy, we can approximate the policy improvement step implicitly by treating the state value function as a random variable, with randomness determined by the action (while still integrating over the dynamics to avoid excessive optimism), and then taking a state conditional upper expectile of this random variable to estimate the value of the best actions in that state.

D4RL Offline RL +3

Training on Test Data with Bayesian Adaptation for Covariate Shift

no code implementations 27 Sep 2021 Aurick Zhou, Sergey Levine

When faced with distribution shift at test time, deep neural networks often make inaccurate predictions with unreliable uncertainty estimates.

Domain Adaptation Image Classification

A Workflow for Offline Model-Free Robotic Reinforcement Learning

1 code implementation 22 Sep 2021 Aviral Kumar, Anikait Singh, Stephen Tian, Chelsea Finn, Sergey Levine

To this end, we devise a set of metrics and conditions that can be tracked over the course of offline training, and can inform the practitioner about how the algorithm and model architecture should be adjusted to improve final performance.

Offline RL reinforcement-learning +1

Conservative Data Sharing for Multi-Task Offline Reinforcement Learning

no code implementations NeurIPS 2021 Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Sergey Levine, Chelsea Finn

We argue that a natural use case of offline RL is in settings where we can pool large amounts of data collected in various scenarios for solving different tasks, and utilize all of this data to learn behaviors for all the tasks more effectively rather than training each one in isolation.

Offline RL reinforcement-learning +1

Robust Predictable Control

1 code implementation NeurIPS 2021 Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine

Many of the challenges facing today's reinforcement learning (RL) algorithms, such as robustness, generalization, transfer, and computational efficiency are closely related to compression.

Computational Efficiency Decision Making +1

Fully Autonomous Real-World Reinforcement Learning with Applications to Mobile Manipulation

no code implementations 28 Jul 2021 Charles Sun, Jędrzej Orbik, Coline Devin, Brian Yang, Abhishek Gupta, Glen Berseth, Sergey Levine

Our aim is to devise a robotic reinforcement learning system for learning navigation and manipulation together, in an autonomous way without human intervention, enabling continual learning under realistic assumptions.

Continual Learning Navigate +2

MURAL: Meta-Learning Uncertainty-Aware Rewards for Outcome-Driven Reinforcement Learning

no code implementations 15 Jul 2021 Kevin Li, Abhishek Gupta, Ashwin Reddy, Vitchyr Pong, Aurick Zhou, Justin Yu, Sergey Levine

In this work, we show that an uncertainty-aware classifier can solve challenging reinforcement learning problems by both encouraging exploration and providing directed guidance toward positive outcomes.

Meta-Learning reinforcement-learning +1
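
As a rough illustration of the classifier-as-reward idea (not MURAL's meta-learned NML classifier, just the basic shape), the success classifier's log-probability can be used directly as a shaped reward:

    import numpy as np

    def classifier_reward(logit):
        # Reward = log p(success | s) from a binary success classifier,
        # computed as log-sigmoid of its logit. An uncertainty-aware
        # classifier keeps this value moderate on unfamiliar states, so
        # the reward stays informative instead of saturating near zero.
        return -np.log1p(np.exp(-logit))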

Conservative Objective Models for Effective Offline Model-Based Optimization

2 code implementations14 Jul 2021 Brandon Trabucco, Aviral Kumar, Xinyang Geng, Sergey Levine

Computational design problems arise in a number of settings, from synthetic biology to computer architectures.

Offline Meta-Reinforcement Learning with Online Self-Supervision

1 code implementation8 Jul 2021 Vitchyr H. Pong, Ashvin Nair, Laura Smith, Catherine Huang, Sergey Levine

If we can meta-train on offline data, then we can reuse the same static dataset, labeled once with rewards for different tasks, to meta-train policies that adapt to a variety of new tasks at meta-test time.

Meta Reinforcement Learning Offline RL +2

Pragmatic Image Compression for Human-in-the-Loop Decision-Making

1 code implementation NeurIPS 2021 Siddharth Reddy, Anca D. Dragan, Sergey Levine

Standard lossy image compression algorithms aim to preserve an image's appearance, while minimizing the number of bits needed to transmit it.

Car Racing Decision Making +1

Modularity in Reinforcement Learning via Algorithmic Independence in Credit Assignment

no code implementations ICLR Workshop Learning_to_Learn 2021 Michael Chang, Sidhant Kaushik, Sergey Levine, Thomas L. Griffiths

Empirical evidence suggests that such action-value methods are more sample efficient than policy-gradient methods on transfer problems that require only sparse changes to a sequence of previously optimal decisions.

Decision Making Policy Gradient Methods +2

Model-Based Reinforcement Learning via Latent-Space Collocation

1 code implementation24 Jun 2021 Oleh Rybkin, Chuning Zhu, Anusha Nagabandi, Kostas Daniilidis, Igor Mordatch, Sergey Levine

The resulting latent collocation method (LatCo) optimizes trajectories of latent states, which improves over previously proposed shooting methods for visual model-based RL on tasks with sparse rewards and long-term goals.

Model-based Reinforcement Learning reinforcement-learning +1
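
A hedged sketch of the collocation objective described above: latent states are free optimization variables, and the dynamics are enforced softly as a penalty rather than by rolling the model forward (function names are illustrative, not the paper's API):

    import numpy as np

    def collocation_objective(states, actions, dynamics, reward, lam=10.0):
        # states: array of shape (T+1, d), optimized jointly with actions.
        # Soft dynamics constraints let the optimizer reach distant goals
        # first and make the trajectory dynamically consistent later.
        total_reward = sum(reward(s, a) for s, a in zip(states[:-1], actions))
        violation = sum(
            np.sum((states[t + 1] - dynamics(states[t], actions[t])) ** 2)
            for t in range(len(actions)))
        return -total_reward + lam * violation  # to be minimized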

FitVid: Overfitting in Pixel-Level Video Prediction

1 code implementation24 Jun 2021 Mohammad Babaeizadeh, Mohammad Taghi Saffar, Suraj Nair, Sergey Levine, Chelsea Finn, Dumitru Erhan

There is a growing body of evidence that underfitting on the training data is one of the primary causes of low-quality predictions.

Image Augmentation Video Generation +1

Hierarchically Integrated Models: Learning to Navigate from Heterogeneous Robots

no code implementations24 Jun 2021 Katie Kang, Gregory Kahn, Sergey Levine

In this work, we propose a deep reinforcement learning algorithm with hierarchically integrated models (HInt).

Navigate reinforcement-learning +1

Intrinsic Control of Variational Beliefs in Dynamic Partially-Observed Visual Environments

no code implementations ICML Workshop URL 2021 Nicholas Rhinehart, Jenny Wang, Glen Berseth, John D Co-Reyes, Danijar Hafner, Chelsea Finn, Sergey Levine

We study this question in dynamic partially-observed environments, and argue that a compact and general learning objective is to minimize the entropy of the agent's state visitation estimated using a latent state-space model.
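
The stated objective can be read as surprise minimization. Since cross-entropy upper-bounds entropy, minimizing the negative log-density under a learned model minimizes an upper bound on the state-visitation entropy (notation mine, not the paper's):

    \mathcal{H}\big(d^\pi(s)\big) \;\le\; -\,\mathbb{E}_{s \sim d^\pi}\big[\log q_\theta(s)\big]

so the agent can simply be given the intrinsic reward r(s) = \log q_\theta(s).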

Reinforcement Learning as One Big Sequence Modeling Problem

1 code implementation ICML Workshop URL 2021 Michael Janner, Qiyang Li, Sergey Levine

However, we can also view RL as a sequence modeling problem, with the goal being to predict a sequence of actions that leads to a sequence of high rewards.

Imitation Learning Offline RL +2
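
A toy illustration of the sequence-modeling view described above: a trajectory is flattened into one long sequence that a standard autoregressive model can be trained on (the actual method additionally discretizes each state and action dimension into tokens and plans with beam search; names here are illustrative):

    def flatten_trajectory(states, actions, rewards):
        # Interleave (s_t, a_t, r_t) into a single flat sequence so an
        # off-the-shelf sequence model can treat RL data like text.
        tokens = []
        for s, a, r in zip(states, actions, rewards):
            tokens.extend([*s, *a, r])
        return tokens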

Offline Reinforcement Learning as One Big Sequence Modeling Problem

2 code implementations NeurIPS 2021 Michael Janner, Qiyang Li, Sergey Levine

Reinforcement learning (RL) is typically concerned with estimating stationary policies or single-step models, leveraging the Markov property to factorize problems in time.

Imitation Learning Offline RL +2

Variational Empowerment as Representation Learning for Goal-Based Reinforcement Learning

no code implementations2 Jun 2021 Jongwook Choi, Archit Sharma, Honglak Lee, Sergey Levine, Shixiang Shane Gu

Learning to reach goal states and learning diverse skills through mutual information (MI) maximization have been proposed as principled frameworks for self-supervised reinforcement learning, allowing agents to acquire broadly applicable multitask policies with minimal reward engineering.

reinforcement-learning Reinforcement Learning (RL) +1
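
The MI objectives analyzed above are typically optimized through a variational lower bound; a minimal DIAYN-style intrinsic-reward sketch (the discriminator term log_q_z_given_s is a hypothetical placeholder):

    import numpy as np

    def skill_reward(log_q_z_given_s, log_p_z):
        # Variational lower bound on I(S; Z): reward the agent when the
        # visited state makes the sampled skill z easy to infer back.
        return log_q_z_given_s - log_p_z

    # e.g., with 8 uniformly sampled skills, log p(z) = -log(8)
    r = skill_reward(log_q_z_given_s=-0.4, log_p_z=-np.log(8))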

What Can I Do Here? Learning New Skills by Imagining Visual Affordances

2 code implementations1 Jun 2021 Alexander Khazatsky, Ashvin Nair, Daniel Jing, Sergey Levine

In effect, prior data is used to learn what kinds of outcomes may be possible, such that when the robot encounters an unfamiliar setting, it can sample potential outcomes from its model, attempt to reach them, and thereby update both its skills and its outcome model.

Zero-shot Generalization

DisCo RL: Distribution-Conditioned Reinforcement Learning for General-Purpose Policies

no code implementations23 Apr 2021 Soroush Nasiriany, Vitchyr H. Pong, Ashvin Nair, Alexander Khazatsky, Glen Berseth, Sergey Levine

Contextual policies provide this capability in principle, but the representation of the context determines the degree of generalization and expressivity.

reinforcement-learning Reinforcement Learning (RL) +1

Contingencies from Observations: Tractable Contingency Planning with Learned Behavior Models

1 code implementation21 Apr 2021 Nicholas Rhinehart, Jeff He, Charles Packer, Matthew A. Wright, Rowan Mcallister, Joseph E. Gonzalez, Sergey Levine

Humans have a remarkable ability to make decisions by accurately reasoning about future events, including the future behaviors and states of mind of other agents.

Outcome-Driven Reinforcement Learning via Variational Inference

no code implementations NeurIPS 2021 Tim G. J. Rudner, Vitchyr H. Pong, Rowan Mcallister, Yarin Gal, Sergey Levine

While reinforcement learning algorithms provide automated acquisition of optimal policies, practical application of such methods requires a number of design decisions, such as manually designing reward functions that not only define the task, but also provide sufficient shaping to accomplish it.

reinforcement-learning Reinforcement Learning (RL) +1

MT-Opt: Continuous Multi-Task Robotic Reinforcement Learning at Scale

no code implementations16 Apr 2021 Dmitry Kalashnikov, Jacob Varley, Yevgen Chebotar, Benjamin Swanson, Rico Jonschkowski, Chelsea Finn, Sergey Levine, Karol Hausman

In this paper, we study how a large-scale collective robotic learning system can acquire a repertoire of behaviors simultaneously, sharing exploration, experience, and representations across tasks.

reinforcement-learning Reinforcement Learning (RL)

Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills

no code implementations15 Apr 2021 Yevgen Chebotar, Karol Hausman, Yao Lu, Ted Xiao, Dmitry Kalashnikov, Jake Varley, Alex Irpan, Benjamin Eysenbach, Ryan Julian, Chelsea Finn, Sergey Levine

We consider the problem of learning useful robotic skills from previously collected offline data without access to manually specified rewards or additional online exploration, a setting that is becoming increasingly important for scaling robot learning by reusing past robotic data.

Q-Learning reinforcement-learning +1

Rapid Exploration for Open-World Navigation with Latent Goal Models

no code implementations12 Apr 2021 Dhruv Shah, Benjamin Eysenbach, Gregory Kahn, Nicholas Rhinehart, Sergey Levine

We describe a robotic learning system for autonomous exploration and navigation in diverse, open-world environments.

Autonomous Navigation

AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control

3 code implementations5 Apr 2021 Xue Bin Peng, Ze Ma, Pieter Abbeel, Sergey Levine, Angjoo Kanazawa

Our system produces high-quality motions that are comparable to those achieved by state-of-the-art tracking-based techniques, while also being able to easily accommodate large datasets of unstructured motion clips.

Imitation Learning Reinforcement Learning (RL)

Benchmarks for Deep Off-Policy Evaluation

3 code implementations ICLR 2021 Justin Fu, Mohammad Norouzi, Ofir Nachum, George Tucker, Ziyu Wang, Alexander Novikov, Mengjiao Yang, Michael R. Zhang, Yutian Chen, Aviral Kumar, Cosmin Paduraru, Sergey Levine, Tom Le Paine

Off-policy evaluation (OPE) holds the promise of being able to leverage large, offline datasets for both evaluating and selecting complex policies for decision making.

Benchmarking Continuous Control +3
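
For readers new to OPE, the canonical importance-sampling estimator (one of the simple baselines such benchmarks typically include) reweights behavior-policy returns by the evaluation policy's likelihood ratio; a hedged sketch, not the benchmark's code:

    import numpy as np

    def is_estimate(trajectories, pi_e, pi_b, gamma=0.99):
        # trajectories: lists of (state, action, reward) tuples collected
        # under the behavior policy pi_b; pi_e and pi_b map (a, s) to the
        # probability of taking action a in state s.
        values = []
        for traj in trajectories:
            ratio, ret = 1.0, 0.0
            for t, (s, a, r) in enumerate(traj):
                ratio *= pi_e(a, s) / pi_b(a, s)
                ret += (gamma ** t) * r
            values.append(ratio * ret)
        return np.mean(values)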

Accelerating Online Reinforcement Learning via Model-Based Meta-Learning

no code implementations ICLR Workshop Learning_to_Learn 2021 John D Co-Reyes, Sarah Feng, Glen Berseth, Jie Qui, Sergey Levine

Current reinforcement learning algorithms struggle to adapt quickly to new situations without large amounts of experience and, typically, large amounts of optimization over that experience.

Meta-Learning reinforcement-learning +1

Maximum Entropy RL (Provably) Solves Some Robust RL Problems

no code implementations ICLR 2022 Benjamin Eysenbach, Sergey Levine

Many potential applications of reinforcement learning (RL) require guarantees that the agent will perform well in the face of disturbances to the dynamics or reward function.

Reinforcement Learning (RL)
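
For reference, the maximum-entropy objective in question is the standard one, with temperature \alpha:

    \max_\pi \; \mathbb{E}_\pi\Big[\sum_t \gamma^t \big(r(s_t, a_t) + \alpha\,\mathcal{H}(\pi(\cdot \mid s_t))\big)\Big]

The paper's claim, roughly, is that maximizing this objective also maximizes a lower bound on a robust-RL objective over a set of perturbed reward and dynamics functions.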

COMBO: Conservative Offline Model-Based Policy Optimization

4 code implementations NeurIPS 2021 Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, Chelsea Finn

We overcome this limitation by developing a new model-based offline RL algorithm, COMBO, that regularizes the value function on out-of-support state-action tuples generated via rollouts under the learned model.

Offline RL Uncertainty Quantification
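
A rough CQL-style sketch of the regularizer described above: push Q-values down on state-action tuples generated by model rollouts and up on real dataset tuples (hypothetical names; not the released implementation):

    import numpy as np

    def combo_regularizer(q, model_sa, data_sa, beta=1.0):
        # q: callable mapping a batch of (s, a) pairs to Q-value estimates.
        # The penalty is added to the usual Bellman error; it suppresses
        # value overestimation on out-of-support model-generated samples.
        return beta * (np.mean(q(model_sa)) - np.mean(q(data_sa)))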

Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation

no code implementations ICLR 2021 Justin Fu, Sergey Levine

We propose to tackle this problem by leveraging the normalized maximum-likelihood (NML) estimator, which provides a principled approach to handling uncertainty and out-of-distribution inputs.
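
The conditional NML distribution underlying this approach assigns each candidate label the likelihood under a model retrained with that label, normalized across labels (standard formulation; notation mine):

    p_{\mathrm{NML}}(y \mid x) = \frac{p_{\hat\theta_{x,y}}(y \mid x)}{\sum_{y'} p_{\hat\theta_{x,y'}}(y' \mid x)}

where \hat\theta_{x,y} is the maximum-likelihood estimate on the dataset augmented with (x, y). Out-of-distribution inputs, which can be fit almost equally well under many labels, therefore receive correspondingly diffuse predictions.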

How to Train Your Robot with Deep Reinforcement Learning; Lessons We've Learned

no code implementations4 Feb 2021 Julian Ibarz, Jie Tan, Chelsea Finn, Mrinal Kalakrishnan, Peter Pastor, Sergey Levine

Learning to perceive and move in the real world presents numerous challenges, some of which are easier to address than others, and some of which are often not considered in RL research that focuses only on simulated domains.

reinforcement-learning Reinforcement Learning (RL)

Evolving Reinforcement Learning Algorithms

5 code implementations ICLR 2021 John D. Co-Reyes, Yingjie Miao, Daiyi Peng, Esteban Real, Sergey Levine, Quoc V. Le, Honglak Lee, Aleksandra Faust

Learning from scratch on simple classical control and gridworld tasks, our method rediscovers the temporal-difference (TD) algorithm.

Atari Games Meta-Learning +2
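
For reference, the rediscovered temporal-difference update in its standard tabular TD(0) form (textbook version, not the evolved program itself):

    def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
        # V: dict mapping state -> value estimate (missing states read as 0).
        # Move V(s) toward the bootstrapped target r + gamma * V(s').
        target = r + gamma * V.get(s_next, 0.0)
        V[s] = V.get(s, 0.0) + alpha * (target - V.get(s, 0.0))
        return V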

On Trade-offs of Image Prediction in Visual Model-Based Reinforcement Learning

no code implementations1 Jan 2021 Mohammad Babaeizadeh, Mohammad Taghi Saffar, Danijar Hafner, Dumitru Erhan, Harini Kannan, Chelsea Finn, Sergey Levine

In this paper, we study a number of design decisions for the predictive model in visual MBRL algorithms, focusing specifically on methods that use a predictive model for planning.

Model-based Reinforcement Learning reinforcement-learning +1

Variable-Shot Adaptation for Incremental Meta-Learning

no code implementations1 Jan 2021 Tianhe Yu, Xinyang Geng, Chelsea Finn, Sergey Levine

Few-shot meta-learning methods consider the problem of learning new tasks from a small, fixed number of examples, by meta-learning across static data from a set of previous tasks.

Meta-Learning Zero-Shot Learning

Invariant Representations for Reinforcement Learning without Reconstruction

no code implementations ICLR 2021 Amy Zhang, Rowan Thomas McAllister, Roberto Calandra, Yarin Gal, Sergey Levine

We study how representation learning can accelerate reinforcement learning from rich observations, such as images, without relying either on domain knowledge or pixel-reconstruction.

Causal Inference reinforcement-learning +2
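
One concrete instance of reconstruction-free representation learning in this line of work is the bisimulation-metric objective: train the encoder so that latent distances match reward differences plus discounted transition distances. A hedged NumPy sketch with illustrative names (a real implementation would use an autodiff framework):

    import numpy as np

    def bisim_loss(z_i, z_j, r_i, r_j, w2, gamma=0.99):
        # z_i, z_j: latent codes of two sampled states; w2: a precomputed
        # Wasserstein-2 distance between their predicted next-latent
        # distributions. The encoder is trained so that latent distance
        # matches the bisimulation target.
        latent_dist = np.linalg.norm(z_i - z_j, axis=-1)
        target = np.abs(r_i - r_j) + gamma * w2
        return np.mean((latent_dist - target) ** 2)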

Reinforcement Learning with Bayesian Classifiers: Efficient Skill Learning from Outcome Examples

no code implementations1 Jan 2021 Kevin Li, Abhishek Gupta, Vitchyr H. Pong, Ashwin Reddy, Aurick Zhou, Justin Yu, Sergey Levine

In this work, we study a more tractable class of reinforcement learning problems defined by data that provides examples of successful outcome states.

reinforcement-learning Reinforcement Learning (RL)

Factorizing Declarative and Procedural Knowledge in Structured, Dynamical Environments

no code implementations ICLR 2021 Anirudh Goyal, Alex Lamb, Phanideep Gampa, Philippe Beaudoin, Charles Blundell, Sergey Levine, Yoshua Bengio, Michael Curtis Mozer

To use a video game as an illustration, two enemies of the same type will share schemata but will have separate object files to encode their distinct state (e.g., health, position).

Object

Model-Based Visual Planning with Self-Supervised Functional Distances

1 code implementation ICLR 2021 Stephen Tian, Suraj Nair, Frederik Ebert, Sudeep Dasari, Benjamin Eysenbach, Chelsea Finn, Sergey Levine

In our experiments, we find that our method can successfully learn models that perform a variety of tasks at test-time, moving objects amid distractors with a simulated robotic arm and even learning to open and close a drawer using a real-world robot.

reinforcement-learning Reinforcement Learning (RL)

ViNG: Learning Open-World Navigation with Visual Goals

no code implementations17 Dec 2020 Dhruv Shah, Benjamin Eysenbach, Gregory Kahn, Nicholas Rhinehart, Sergey Levine

We propose a learning-based navigation system for reaching visually indicated goals and demonstrate this system on a real mobile robot platform.

Navigate reinforcement-learning +1

Variable-Shot Adaptation for Online Meta-Learning

no code implementations14 Dec 2020 Tianhe Yu, Xinyang Geng, Chelsea Finn, Sergey Levine

Few-shot meta-learning methods consider the problem of learning new tasks from a small, fixed number of examples, by meta-learning across static data from a set of previous tasks.

Meta-Learning Zero-Shot Learning

Models, Pixels, and Rewards: Evaluating Design Trade-offs in Visual Model-Based Reinforcement Learning

1 code implementation8 Dec 2020 Mohammad Babaeizadeh, Mohammad Taghi Saffar, Danijar Hafner, Harini Kannan, Chelsea Finn, Sergey Levine, Dumitru Erhan

In this paper, we study a number of design decisions for the predictive model in visual MBRL algorithms, focusing specifically on methods that use a predictive model for planning.

Model-based Reinforcement Learning Reinforcement Learning (RL)
