Search Results for author: Pieter Abbeel

Found 244 papers, 125 papers with code

Hierarchically Decoupled Morphological Transfer

no code implementations ICML 2020 Donald Hejna, Lerrel Pinto, Pieter Abbeel

Learning long-range behaviors on complex high-dimensional agents is a fundamental problem in robot learning.

CURL: Contrastive Unsupervised Representation Learning for Reinforcement Learning

1 code implementation ICML 2020 Michael Laskin, Pieter Abbeel, Aravind Srinivas

CURL extracts high level features from raw pixels using a contrastive learning objective and performs off-policy control on top of the extracted features.

Contrastive Learning Unsupervised Representation Learning
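
To make the mechanism concrete, here is a minimal sketch of a CURL-style contrastive objective (InfoNCE over two augmented views, with a bilinear similarity and a momentum key encoder). The encoder interfaces, the learnable matrix W, the batch-wide crop, and the crop size are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn.functional as F

def random_crop(x, out=84):
    # Simple batch-wide random crop of (B, C, H, W) images; CURL crops per image.
    b, c, h, w = x.shape
    i = torch.randint(0, h - out + 1, (1,)).item()
    j = torch.randint(0, w - out + 1, (1,)).item()
    return x[:, :, i:i + out, j:j + out]

def curl_loss(query_encoder, key_encoder, W, obs):
    q = query_encoder(random_crop(obs))                # anchor features (B, D)
    with torch.no_grad():
        k = key_encoder(random_crop(obs))              # positive keys from momentum encoder
    logits = q @ W @ k.t()                             # bilinear similarities (B, B)
    logits = logits - logits.max(dim=1, keepdim=True).values  # numerical stability
    labels = torch.arange(q.size(0), device=q.device)  # positives sit on the diagonal
    return F.cross_entropy(logits, labels)             # InfoNCE
```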

Responsive Safety in Reinforcement Learning

no code implementations ICML 2020 Adam Stooke, Joshua Achiam, Pieter Abbeel

This intuition leads to our introduction of PID control for the Lagrange multiplier in constrained RL, which we cast as a dynamical system.

Safe Reinforcement Learning
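
A minimal sketch of that idea, with illustrative gains and an assumed per-episode cost signal: the multiplier is produced by proportional, integral, and derivative terms on the constraint violation, where the integral term alone would recover the usual Lagrangian (gradient-ascent) update:

```python
class PIDLagrangian:
    """Illustrative PID controller for a constraint multiplier (not the paper's exact update)."""

    def __init__(self, kp=0.05, ki=0.005, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_cost = 0.0

    def multiplier(self, episode_cost, cost_limit):
        error = episode_cost - cost_limit                     # constraint violation
        self.integral = max(0.0, self.integral + error)       # integral term = classic Lagrangian ascent
        derivative = max(0.0, episode_cost - self.prev_cost)  # react when cost is rising
        self.prev_cost = episode_cost
        return max(0.0, self.kp * error + self.ki * self.integral + self.kd * derivative)
```

The proportional and derivative terms are what damp the oscillation and overshoot of the plain integral update.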

Hindsight Task Relabelling: Experience Replay for Sparse Reward Meta-RL

no code implementations NeurIPS 2021 Charles Packer, Pieter Abbeel, Joseph E. Gonzalez

Meta-reinforcement learning (meta-RL) has proven to be a successful framework for leveraging experience from prior tasks to rapidly learn new related tasks; however, current meta-RL approaches struggle to learn in sparse reward environments.

Meta Reinforcement Learning

Teachable Reinforcement Learning via Advice Distillation

no code implementations NeurIPS 2021 Olivia Watkins, Abhishek Gupta, Trevor Darrell, Pieter Abbeel, Jacob Andreas

Training automated agents to perform complex behaviors in interactive environments is challenging: reinforcement learning requires careful hand-engineering of reward functions, imitation learning requires specialized infrastructure and access to a human expert, and learning from intermediate forms of supervision (like binary preferences) is time-consuming and provides minimal information per human intervention.

Decision Making Imitation Learning

Count-Based Temperature Scheduling for Maximum Entropy Reinforcement Learning

no code implementations 28 Nov 2021 Dailin Hu, Pieter Abbeel, Roy Fox

Maximum Entropy Reinforcement Learning (MaxEnt RL) algorithms such as Soft Q-Learning (SQL) and Soft Actor-Critic trade off reward and policy entropy, which has the potential to improve training stability and robustness.

Q-Learning
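
A hedged sketch of count-based temperature scheduling: decay each state's entropy temperature as its visit count grows. The 1/sqrt(count) decay and the hashable state key are assumptions for illustration, not the paper's exact schedule:

```python
from collections import defaultdict

class CountTemperature:
    """Anneal the MaxEnt RL temperature per state as visits accumulate."""

    def __init__(self, beta0=1.0):
        self.beta0 = beta0
        self.counts = defaultdict(int)

    def __call__(self, state_key):
        self.counts[state_key] += 1
        # Explore (high entropy) in rarely visited states, exploit elsewhere.
        return self.beta0 / self.counts[state_key] ** 0.5
```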

Generalization in Dexterous Manipulation via Geometry-Aware Multi-Task Learning

no code implementations 4 Nov 2021 Wenlong Huang, Igor Mordatch, Pieter Abbeel, Deepak Pathak

We show that a single generalist policy can perform in-hand manipulation of over 100 geometrically-diverse real-world objects and generalize to new objects with unseen shape or size.

Multi-Task Learning

B-Pref: Benchmarking Preference-Based Reinforcement Learning

1 code implementation 4 Nov 2021 Kimin Lee, Laura Smith, Anca Dragan, Pieter Abbeel

However, it is difficult to quantify the progress in preference-based RL due to the lack of a commonly adopted benchmark.

Mastering Atari Games with Limited Data

1 code implementation NeurIPS 2021 Weirui Ye, Shaohuai Liu, Thanard Kurutach, Pieter Abbeel, Yang Gao

Recently, there has been significant progress in sample efficient image-based RL algorithms; however, consistent human-level performance on the Atari game benchmark remains an elusive goal.

Atari Games

URLB: Unsupervised Reinforcement Learning Benchmark

1 code implementation 28 Oct 2021 Michael Laskin, Denis Yarats, Hao Liu, Kimin Lee, Albert Zhan, Kevin Lu, Catherine Cang, Lerrel Pinto, Pieter Abbeel

Deep Reinforcement Learning (RL) has emerged as a powerful paradigm to solve a range of complex yet specific control tasks.

Continuous Control Unsupervised Reinforcement Learning

Temporal-Difference Value Estimation via Uncertainty-Guided Soft Updates

no code implementations 28 Oct 2021 Litian Liang, Yaosheng Xu, Stephen McAleer, Dailin Hu, Alexander Ihler, Pieter Abbeel, Roy Fox

Under the belief that $\beta$ is closely related to the (state dependent) model uncertainty, Entropy Regularized Q-Learning (EQL) further introduces a principled scheduling of $\beta$ by maintaining a collection of the model parameters that characterizes model uncertainty.

Q-Learning

Towards More Generalizable One-shot Visual Imitation Learning

no code implementations 26 Oct 2021 Zhao Mandi, Fangchen Liu, Kimin Lee, Pieter Abbeel

We then study the multi-task setting, where multi-task training is followed by (i) one-shot imitation on variations within the training tasks, (ii) one-shot imitation on new tasks, and (iii) fine-tuning on new tasks.

Contrastive Learning Fine-tuning +2

Skill Preferences: Learning to Extract and Execute Robotic Skills from Human Feedback

no code implementations 11 Aug 2021 Xiaofei Wang, Kimin Lee, Kourosh Hakhamaneshi, Pieter Abbeel, Michael Laskin

A promising approach to solving challenging long-horizon tasks has been to extract behavior priors (skills) by fitting generative models to large offline datasets of demonstrations.

Playful Interactions for Representation Learning

no code implementations 19 Jul 2021 Sarah Young, Jyothish Pari, Pieter Abbeel, Lerrel Pinto

In this work, we propose to use playful interactions in a self-supervised manner to learn visual representations for downstream tasks.

Imitation Learning Representation Learning

Hierarchical Few-Shot Imitation with Skill Transition Models

1 code implementation ICML Workshop URL 2021 Kourosh Hakhamaneshi, Ruihan Zhao, Albert Zhan, Pieter Abbeel, Michael Laskin

To this end, we present Few-shot Imitation with Skill Transition Models (FIST), an algorithm that extracts skills from offline data and utilizes them to generalize to unseen tasks given a few downstream demonstrations.

The MineRL BASALT Competition on Learning from Human Feedback

no code implementations 5 Jul 2021 Rohin Shah, Cody Wild, Steven H. Wang, Neel Alex, Brandon Houghton, William Guss, Sharada Mohanty, Anssi Kanervisto, Stephanie Milani, Nicholay Topin, Pieter Abbeel, Stuart Russell, Anca Dragan

Rather than training AI systems using a predefined reward function or using a labeled dataset with a predefined set of categories, we instead train the AI system using a learning signal derived from some form of human feedback, which can evolve over time as the understanding of the task changes, or as the capabilities of the AI system improve.

Imitation Learning Minecraft

Offline-to-Online Reinforcement Learning via Balanced Replay and Pessimistic Q-Ensemble

1 code implementation 1 Jul 2021 SeungHyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, Jinwoo Shin

Recent advance in deep offline reinforcement learning (RL) has made it possible to train strong robotic agents from offline datasets.

Fine-tuning Offline RL

Scenic4RL: Programmatic Modeling and Generation of Reinforcement Learning Environments

no code implementations 18 Jun 2021 Abdus Salam Azad, Edward Kim, Qiancheng Wu, Kimin Lee, Ion Stoica, Pieter Abbeel, Sanjit A. Seshia

Furthermore, in complex domains such as soccer, the space of possible scenarios is infinite, which makes it impossible for one research group to provide a comprehensive set of scenarios to train, test, and benchmark RL algorithms.

Unsupervised Learning of Visual 3D Keypoints for Control

no code implementations 14 Jun 2021 Boyuan Chen, Pieter Abbeel, Deepak Pathak

Prior works show that structured latent space such as visual keypoints often outperforms unstructured representations for robotic control.

Data-Efficient Exploration with Self Play for Atari

no code implementations ICML Workshop URL 2021 Michael Laskin, Catherine Cang, Ryan Rudes, Pieter Abbeel

To alleviate the reliance on reward engineering, it is important to develop RL algorithms capable of efficiently acquiring skills with no rewards extrinsic to the agent.

Efficient Exploration

PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training

1 code implementation 9 Jun 2021 Kimin Lee, Laura Smith, Pieter Abbeel

We also show that our method is able to utilize real-time human feedback to effectively prevent reward exploitation and learn new behaviors that are difficult to specify with standard reward functions.

Unsupervised Pre-training
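
The reward-learning step that preference-based methods like this build on can be sketched as a Bradley-Terry model over segment returns; `reward_net` and the data layout below are assumed interfaces, not the released implementation:

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_net, seg_a, seg_b, prefer_a):
    """seg_*: (B, T, feat) trajectory segments; prefer_a: (B,) labels in {0, 1}.
    Assumes reward_net maps (B, T, feat) -> (B, T) per-step rewards."""
    ret_a = reward_net(seg_a).sum(dim=1)          # predicted segment returns
    ret_b = reward_net(seg_b).sum(dim=1)
    logits = torch.stack([ret_b, ret_a], dim=1)   # Bradley-Terry: P(a preferred) = softmax
    return F.cross_entropy(logits, prefer_a.long())
```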

JUMBO: Scalable Multi-task Bayesian Optimization using Offline Data

no code implementations 2 Jun 2021 Kourosh Hakhamaneshi, Pieter Abbeel, Vladimir Stojanovic, Aditya Grover

Such a decomposition can dynamically control the reliability of information derived from the online and offline data, and the use of pretrained neural networks permits scalability to large offline datasets.

Gaussian Processes

VideoGPT: Video Generation using VQ-VAE and Transformers

1 code implementation 20 Apr 2021 Wilson Yan, Yunzhi Zhang, Pieter Abbeel, Aravind Srinivas

We present VideoGPT: a conceptually simple architecture for scaling likelihood based generative modeling to natural videos.

Video Generation

Auto-Tuned Sim-to-Real Transfer

1 code implementation 15 Apr 2021 Yuqing Du, Olivia Watkins, Trevor Darrell, Pieter Abbeel, Deepak Pathak

Policies trained in simulation often fail when transferred to the real world due to the 'reality gap' where the simulator is unable to accurately capture the dynamics and visual properties of the real world.

Learning What To Do by Simulating the Past

1 code implementation ICLR 2021 David Lindner, Rohin Shah, Pieter Abbeel, Anca Dragan

Since reward functions are hard to specify, recent work has focused on learning policies from human feedback.

GEM: Group Enhanced Model for Learning Dynamical Control Systems

no code implementations 7 Apr 2021 Philippe Hansen-Estruch, Wenling Shang, Lerrel Pinto, Pieter Abbeel, Stas Tiomkin

In this work, we take advantage of these structures to build effective dynamical models that are amenable to sample-based learning.

Continuous Control Model-based Reinforcement Learning

AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control

no code implementations 5 Apr 2021 Xue Bin Peng, Ze Ma, Pieter Abbeel, Sergey Levine, Angjoo Kanazawa

Our system produces high-quality motions that are comparable to those achieved by state-of-the-art tracking-based techniques, while also being able to easily accommodate large datasets of unstructured motion clips.

Imitation Learning

Mutual Information State Intrinsic Control

2 code implementations ICLR 2021 Rui Zhao, Yang Gao, Pieter Abbeel, Volker Tresp, Wei Xu

Reinforcement learning has been shown to be highly successful at many challenging tasks.

Pretrained Transformers as Universal Computation Engines

2 code implementations 9 Mar 2021 Kevin Lu, Aditya Grover, Pieter Abbeel, Igor Mordatch

We investigate the capability of a transformer pretrained on natural language to generalize to other modalities with minimal finetuning – in particular, without finetuning of the self-attention and feedforward layers of the residual blocks.
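
A sketch of that recipe using a Hugging Face GPT-2 as a stand-in; the parameter-name matching is a heuristic assumption, and the new modality-specific input/output layers (trained from scratch) are omitted:

```python
from transformers import GPT2Model

model = GPT2Model.from_pretrained("gpt2")
for name, param in model.named_parameters():
    # Freeze the self-attention and feedforward weights of every block; keep
    # layer norms ("ln") and the token/positional embeddings ("wte"/"wpe")
    # trainable, matching the frozen-transformer setup described above.
    param.requires_grad = ("ln" in name) or ("wte" in name) or ("wpe" in name)
```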

Task-Agnostic Morphology Evolution

1 code implementation ICLR 2021 Donald J. Hejna III, Pieter Abbeel, Lerrel Pinto

Deep reinforcement learning primarily focuses on learning behavior, usually overlooking the fact that an agent's function is largely determined by form.

MSA Transformer

1 code implementation 13 Feb 2021 Roshan Rao, Jason Liu, Robert Verkuil, Joshua Meier, John F. Canny, Pieter Abbeel, Tom Sercu, Alexander Rives

Unsupervised protein language models trained across millions of diverse sequences learn structure and function of proteins.

Language Modelling Multiple Sequence Alignment

Bottleneck Transformers for Visual Recognition

11 code implementations CVPR 2021 Aravind Srinivas, Tsung-Yi Lin, Niki Parmar, Jonathon Shlens, Pieter Abbeel, Ashish Vaswani

Finally, we present a simple adaptation of the BoTNet design for image classification, resulting in models that achieve a strong performance of 84.7% top-1 accuracy on the ImageNet benchmark while being up to 1.64x faster in compute time than the popular EfficientNet models on TPU-v3 hardware.

Image Classification Instance Segmentation +1

Discrete Predictive Representation for Long-horizon Planning

no code implementations 1 Jan 2021 Thanard Kurutach, Julia Peng, Yang Gao, Stuart Russell, Pieter Abbeel

Discrete representations have been key in enabling robots to plan at more abstract levels and solve temporally-extended tasks more efficiently for decades.

Addressing Distribution Shift in Online Reinforcement Learning with Offline Datasets

no code implementations 1 Jan 2021 SeungHyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, Jinwoo Shin

As it turns out, fine-tuning offline RL agents is a non-trivial challenge, due to distribution shift – the agent encounters out-of-distribution samples during online interaction, which may cause bootstrapping error in Q-learning and instability during fine-tuning.

Fine-tuning Offline RL +1

Unsupervised Active Pre-Training for Reinforcement Learning

no code implementations 1 Jan 2021 Hao Liu, Pieter Abbeel

On DMControl suite, APT beats all baselines in terms of asymptotic performance and data efficiency and dramatically improves performance on tasks that are extremely difficult for training from scratch.

Atari Games Contrastive Learning +1

Robust Imitation via Decision-Time Planning

no code implementations 1 Jan 2021 Carl Qi, Pieter Abbeel, Aditya Grover

The goal of imitation learning is to mimic expert behavior from demonstrations, without access to an explicit reward signal.

Imitation Learning

R-LAtte: Attention Module for Visual Control via Reinforcement Learning

no code implementations 1 Jan 2021 Mandi Zhao, Qiyang Li, Aravind Srinivas, Ignasi Clavera, Kimin Lee, Pieter Abbeel

Attention mechanisms are generic inductive biases that have played a critical role in improving the state-of-the-art in supervised learning, unsupervised pre-training and generative modeling for multiple domains including vision, language and speech.

Unsupervised Pre-training

Weighted Bellman Backups for Improved Signal-to-Noise in Q-Updates

no code implementations 1 Jan 2021 Kimin Lee, Michael Laskin, Aravind Srinivas, Pieter Abbeel

Furthermore, since our weighted Bellman backups rely on maintaining an ensemble, we investigate how weighted Bellman backups interact with other benefits previously derived from ensembles: (a) Bootstrap; (b) UCB Exploration.

Q-Learning
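
One plausible reading of such a weighted backup, sketched for the discrete-action case: down-weight TD targets where the target-network ensemble disagrees. The sigmoid weighting and the temperature are illustrative choices, not necessarily the paper's exact form:

```python
import torch

def weighted_td_loss(q_net, target_ensemble, batch, gamma=0.99, temp=10.0):
    obs, act, rew, next_obs, done = batch
    with torch.no_grad():
        next_qs = torch.stack([q(next_obs).max(dim=1).values
                               for q in target_ensemble])       # (E, B)
        target = rew + gamma * (1 - done) * next_qs.mean(0)     # Bellman target
        weight = torch.sigmoid(-next_qs.std(0) * temp) + 0.5    # in (0.5, 1]: trust agreeing targets more
    q = q_net(obs).gather(1, act.unsqueeze(1)).squeeze(1)
    return (weight * (q - target) ** 2).mean()
```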

Benefits of Assistance over Reward Learning

no code implementations 1 Jan 2021 Rohin Shah, Pedro Freire, Neel Alex, Rachel Freedman, Dmitrii Krasheninnikov, Lawrence Chan, Michael D Dennis, Pieter Abbeel, Anca Dragan, Stuart Russell

By merging reward learning and control, assistive agents can reason about the impact of control actions on reward learning, leading to several advantages over agents based on reward learning.

Compute- and Memory-Efficient Reinforcement Learning with Latent Experience Replay

no code implementations 1 Jan 2021 Lili Chen, Kimin Lee, Aravind Srinivas, Pieter Abbeel

In this paper, we present Latent Vector Experience Replay (LeVER), a simple modification of existing off-policy RL methods, to address these computational and memory requirements without sacrificing the performance of RL agents.

Atari Games Transfer Learning

VideoGen: Generative Modeling of Videos using VQ-VAE and Transformers

no code implementations 1 Jan 2021 Yunzhi Zhang, Wilson Yan, Pieter Abbeel, Aravind Srinivas

We present VideoGen: a conceptually simple architecture for scaling likelihood based generative modeling to natural videos.

Video Generation

A Framework for Efficient Robotic Manipulation

no code implementations 14 Dec 2020 Albert Zhan, Philip Zhao, Lerrel Pinto, Pieter Abbeel, Michael Laskin

Building on these advances, we present a Framework for Efficient Robotic Manipulation (FERM) that utilizes data augmentation and unsupervised learning to achieve extremely sample-efficient training of robotic manipulation policies with sparse rewards.

Data Augmentation Unsupervised Representation Learning

Parallel Training of Deep Networks with Local Updates

no code implementations 7 Dec 2020 Michael Laskin, Luke Metz, Seth Nabarro, Mark Saroufim, Badreddine Noune, Carlo Luschi, Jascha Sohl-Dickstein, Pieter Abbeel

Deep learning models trained on large data sets have been widely successful in both vision and language domains.

Reset-Free Lifelong Learning with Skill-Space Planning

1 code implementation ICLR 2021 Kevin Lu, Aditya Grover, Pieter Abbeel, Igor Mordatch

We propose Lifelong Skill Planning (LiSP), an algorithmic framework for non-episodic lifelong RL based on planning in an abstract space of higher-order skills.

LaND: Learning to Navigate from Disengagements

1 code implementation 9 Oct 2020 Gregory Kahn, Pieter Abbeel, Sergey Levine

However, we believe that these disengagements not only show where the system fails, which is useful for troubleshooting, but also provide a direct learning signal by which the robot can learn to navigate.

Autonomous Navigation Imitation Learning

Decoupling Representation Learning from Reinforcement Learning

2 code implementations 14 Sep 2020 Adam Stooke, Kimin Lee, Pieter Abbeel, Michael Laskin

In an effort to overcome limitations of reward-driven feature learning in deep reinforcement learning (RL) from images, we propose decoupling representation learning from policy learning.

Data Augmentation Representation Learning

Visual Imitation Made Easy

no code implementations 11 Aug 2020 Sarah Young, Dhiraj Gandhi, Shubham Tulsiani, Abhinav Gupta, Pieter Abbeel, Lerrel Pinto

We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.

Imitation Learning Structure from Motion

Robust Reinforcement Learning using Adversarial Populations

1 code implementation 4 Aug 2020 Eugene Vinitsky, Yuqing Du, Kanaad Parvate, Kathy Jang, Pieter Abbeel, Alexandre Bayen

Reinforcement Learning (RL) is an effective tool for controller design but can struggle with issues of robustness, failing catastrophically when the underlying system dynamics are perturbed.

Dynamics Generalization via Information Bottleneck in Deep Reinforcement Learning

no code implementations 3 Aug 2020 Xingyu Lu, Kimin Lee, Pieter Abbeel, Stas Tiomkin

Despite the significant progress of deep reinforcement learning (RL) in solving sequential decision making problems, RL agents often overfit to training environments and struggle to adapt to new, unseen environments.

Decision Making

Hybrid Discriminative-Generative Training via Contrastive Learning

1 code implementation 17 Jul 2020 Hao Liu, Pieter Abbeel

In this paper we show that through the perspective of hybrid discriminative-generative training of energy-based models we can make a direct connection between contrastive learning and supervised learning.

Contrastive Learning Out-of-Distribution Detection

Efficient Empowerment Estimation for Unsupervised Stabilization

no code implementations ICLR 2021 Ruihan Zhao, Kevin Lu, Pieter Abbeel, Stas Tiomkin

We demonstrate our solution for sample-based unsupervised stabilization on different dynamical control systems and show the advantages of our method by comparing it to the existing VLB approaches.

Variable Skipping for Autoregressive Range Density Estimation

1 code implementation ICML 2020 Eric Liang, Zongheng Yang, Ion Stoica, Pieter Abbeel, Yan Duan, Xi Chen

In this paper, we explore a technique, variable skipping, for accelerating range density estimation over deep autoregressive models.

Data Augmentation Density Estimation

Responsive Safety in Reinforcement Learning by PID Lagrangian Methods

no code implementations 8 Jul 2020 Adam Stooke, Joshua Achiam, Pieter Abbeel

Lagrangian methods are widely used algorithms for constrained optimization problems, but their learning dynamics exhibit oscillations and overshoot which, when applied to safe reinforcement learning, lead to constraint-violating behavior during agent training.

Safe Reinforcement Learning

Self-Supervised Policy Adaptation during Deployment

4 code implementations ICLR 2021 Nicklas Hansen, Rishabh Jangir, Yu Sun, Guillem Alenyà, Pieter Abbeel, Alexei A. Efros, Lerrel Pinto, Xiaolong Wang

A natural solution would be to keep training after deployment in the new environment, but this cannot be done if the new environment offers no reward signal.

AvE: Assistance via Empowerment

1 code implementation NeurIPS 2020 Yuqing Du, Stas Tiomkin, Emre Kiciman, Daniel Polani, Pieter Abbeel, Anca Dragan

One difficulty in using artificial agents for human-assistive applications lies in the challenge of accurately assisting with a person's goal(s).

Locally Masked Convolution for Autoregressive Models

1 code implementation 22 Jun 2020 Ajay Jain, Pieter Abbeel, Deepak Pathak

For tasks such as image completion, these models are unable to use much of the observed context.

Anomaly Detection Density Estimation +2

Denoising Diffusion Probabilistic Models

21 code implementations NeurIPS 2020 Jonathan Ho, Ajay Jain, Pieter Abbeel

We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics.

Denoising Image Generation +1
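
The training objective admits a compact sketch: sample a timestep, corrupt the data with the corresponding amount of Gaussian noise, and regress the noise. Here `eps_model` is a placeholder denoising network, and `alphas_cumprod` is the precomputed variance schedule (a 1-D tensor on the same device as the data):

```python
import torch
import torch.nn.functional as F

def ddpm_loss(eps_model, x0, alphas_cumprod):
    b = x0.size(0)
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)
    a = alphas_cumprod[t].view(b, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise   # forward diffusion q(x_t | x_0)
    return F.mse_loss(eps_model(x_t, t), noise)    # predict the added noise
```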

Automatic Curriculum Learning through Value Disagreement

1 code implementation NeurIPS 2020 Yunzhi Zhang, Pieter Abbeel, Lerrel Pinto

Our key insight is that if we can sample goals at the frontier of the set of goals that an agent is able to reach, it will provide a significantly stronger learning signal compared to randomly sampled goals.

Curriculum Learning
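
A hedged sketch of goal selection by value disagreement: score candidate goals by the spread of an ensemble of goal-conditioned value estimates and sample where the ensemble is most uncertain. The ensemble call signature and the softmax sampling rule are assumptions:

```python
import torch

def sample_goals(value_ensemble, obs, candidate_goals, k=8):
    """Assumes each v(obs, goals) returns a (N,) value estimate per candidate goal."""
    with torch.no_grad():
        vals = torch.stack([v(obs, candidate_goals) for v in value_ensemble])
        disagreement = vals.std(dim=0)          # high where success is uncertain: the frontier
    probs = torch.softmax(disagreement, dim=0)
    idx = torch.multinomial(probs, k, replacement=False)
    return candidate_goals[idx]
```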

Mutual Information Maximization for Robust Plannable Representations

no code implementations 16 May 2020 Yiming Ding, Ignasi Clavera, Pieter Abbeel

The latter, while presenting low sample complexity, learn latent spaces that need to reconstruct every single detail of the scene.

Model-based Reinforcement Learning

Model-Augmented Actor-Critic: Backpropagating through Paths

no code implementations ICLR 2020 Ignasi Clavera, Violet Fu, Pieter Abbeel

Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator to augment the data for policy optimization or value function learning.

Model-based Reinforcement Learning

Planning to Explore via Self-Supervised World Models

3 code implementations 12 May 2020 Ramanan Sekar, Oleh Rybkin, Kostas Daniilidis, Pieter Abbeel, Danijar Hafner, Deepak Pathak

Reinforcement learning allows solving complex tasks; however, the learning tends to be task-specific and the sample efficiency remains a challenge.

Plan2Vec: Unsupervised Representation Learning by Latent Plans

1 code implementation 7 May 2020 Ge Yang, Amy Zhang, Ari S. Morcos, Joelle Pineau, Pieter Abbeel, Roberto Calandra

In this paper we introduce plan2vec, an unsupervised representation learning approach that is inspired by reinforcement learning.

Motion Planning Unsupervised Representation Learning

Reinforcement Learning with Augmented Data

1 code implementation NeurIPS 2020 Michael Laskin, Kimin Lee, Adam Stooke, Lerrel Pinto, Pieter Abbeel, Aravind Srinivas

To this end, we present Reinforcement Learning with Augmented Data (RAD), a simple plug-and-play module that can enhance most RL algorithms.

Data Augmentation OpenAI Gym
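
In the same plug-and-play spirit, an augmentation such as random translation can be applied to observations just before the actor/critic update, leaving the underlying RL loss untouched; this pad-and-recrop sketch is illustrative rather than the paper's exact operator:

```python
import torch
import torch.nn.functional as F

def random_translate(imgs, pad=4):
    """Pad then randomly re-crop a (B, C, H, W) batch back to its original size."""
    b, c, h, w = imgs.shape
    padded = F.pad(imgs, (pad, pad, pad, pad), mode="replicate")
    i = torch.randint(0, 2 * pad + 1, (1,)).item()
    j = torch.randint(0, 2 * pad + 1, (1,)).item()
    return padded[:, :, i:i + h, j:j + w]
```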

CURL: Contrastive Unsupervised Representations for Reinforcement Learning

3 code implementations 8 Apr 2020 Aravind Srinivas, Michael Laskin, Pieter Abbeel

On the DeepMind Control Suite, CURL is the first image-based algorithm to nearly match the sample-efficiency of methods that use state-based features.

Atari Games Continuous Control +1

Sparse Graphical Memory for Robust Planning

1 code implementation NeurIPS 2020 Scott Emmons, Ajay Jain, Michael Laskin, Thanard Kurutach, Pieter Abbeel, Deepak Pathak

To operate effectively in the real world, agents should be able to act from high-dimensional raw sensory input such as images and achieve diverse goals across long time-horizons.

Imitation Learning Visual Navigation

Learning Predictive Representations for Deformable Objects Using Contrastive Estimation

1 code implementation 11 Mar 2020 Wilson Yan, Ashwin Vangipuram, Pieter Abbeel, Lerrel Pinto

Using visual model-based learning for deformable object manipulation is challenging due to difficulties in learning plannable visual representations along with complex dynamic models.

Deformable Object Manipulation

Hierarchically Decoupled Imitation for Morphological Transfer

1 code implementation 3 Mar 2020 Donald J. Hejna III, Pieter Abbeel, Lerrel Pinto

Learning long-range behaviors on complex high-dimensional agents is a fundamental problem in robot learning.

Hallucinative Topological Memory for Zero-Shot Visual Planning

1 code implementation ICML 2020 Kara Liu, Thanard Kurutach, Christine Tung, Pieter Abbeel, Aviv Tamar

In visual planning (VP), an agent learns to plan goal-directed behavior from observations of a dynamical system obtained offline, e.g., images obtained from self-supervised robot interaction.

Generalized Hindsight for Reinforcement Learning

no code implementations NeurIPS 2020 Alexander C. Li, Lerrel Pinto, Pieter Abbeel

Compared to standard relabeling techniques, Generalized Hindsight provides a substantially more efficient reuse of samples, which we empirically demonstrate on a suite of multi-task navigation and manipulation tasks.

BADGR: An Autonomous Self-Supervised Learning-Based Navigation System

1 code implementation 13 Feb 2020 Gregory Kahn, Pieter Abbeel, Sergey Levine

Mobile robot navigation is typically regarded as a geometric problem, in which the robot's objective is to perceive the geometry of the environment in order to plan collision-free paths towards a desired goal.

Robot Navigation Self-Supervised Learning

Mutual Information-based State-Control for Intrinsically Motivated Reinforcement Learning

no code implementations 5 Feb 2020 Rui Zhao, Yang Gao, Pieter Abbeel, Volker Tresp, Wei Xu

In reinforcement learning, an agent learns to reach a set of goals by means of an external reward signal.

Preventing Imitation Learning with Adversarial Policy Ensembles

no code implementations 31 Jan 2020 Albert Zhan, Stas Tiomkin, Pieter Abbeel

To our knowledge, this is the first work regarding the protection of policies in Reinforcement Learning.

Imitation Learning

Predictive Coding for Boosting Deep Reinforcement Learning with Sparse Rewards

no code implementations 21 Dec 2019 Xingyu Lu, Stas Tiomkin, Pieter Abbeel

While recent progress in deep reinforcement learning has enabled robots to learn complex behaviors, tasks with long horizons and sparse rewards remain an ongoing challenge.

AVID: Learning Multi-Stage Tasks via Pixel-Level Translation of Human Videos

no code implementations 10 Dec 2019 Laura Smith, Nikita Dhawan, Marvin Zhang, Pieter Abbeel, Sergey Levine

In this paper, we study how these challenges can be alleviated with an automated robotic learning framework, in which multi-stage tasks are defined simply by providing videos of a human demonstrator and then learned autonomously by the robot from raw image observations.

Translation

Learning Efficient Representation for Intrinsic Motivation

no code implementations 4 Dec 2019 Ruihan Zhao, Stas Tiomkin, Pieter Abbeel

The core idea is to represent the relation between action sequences and future states using a stochastic dynamic model in latent space with a specific form.

Adaptive Online Planning for Continual Lifelong Learning

1 code implementation 3 Dec 2019 Kevin Lu, Igor Mordatch, Pieter Abbeel

We study learning control in an online reset-free lifelong learning scenario, where mistakes can compound catastrophically into the future and the underlying dynamics of the environment may change.

Compositional Plan Vectors

1 code implementation NeurIPS 2019 Coline Devin, Daniel Geng, Pieter Abbeel, Trevor Darrell, Sergey Levine

We show that CPVs can be learned within a one-shot imitation learning framework without any additional supervision or information about task hierarchy, and enable a demonstration-conditioned policy to generalize to tasks that sequence twice as many skills as the tasks seen during training.

Imitation Learning

Natural Image Manipulation for Autoregressive Models Using Fisher Scores

no code implementations 25 Nov 2019 Wilson Yan, Jonathan Ho, Pieter Abbeel

Deep autoregressive models are among the most powerful generative models today, achieving state-of-the-art bits per dim.

Image Manipulation Latent Variable Models

Plan Arithmetic: Compositional Plan Vectors for Multi-Task Control

no code implementations 30 Oct 2019 Coline Devin, Daniel Geng, Pieter Abbeel, Trevor Darrell, Sergey Levine

We show that CPVs can be learned within a one-shot imitation learning framework without any additional supervision or information about task hierarchy, and enable a demonstration-conditioned policy to generalize to tasks that sequence twice as many skills as the tasks seen during training.

Imitation Learning

Learning to Manipulate Deformable Objects without Demonstrations

1 code implementation 29 Oct 2019 Yilin Wu, Wilson Yan, Thanard Kurutach, Lerrel Pinto, Pieter Abbeel

Second, instead of jointly learning both the pick and the place locations, we only explicitly learn the placing policy conditioned on random pick points.

Deformable Object Manipulation

Asynchronous Methods for Model-Based Reinforcement Learning

1 code implementation 28 Oct 2019 Yunzhi Zhang, Ignasi Clavera, Boren Tsai, Pieter Abbeel

In this work, we propose an asynchronous framework for model-based reinforcement learning methods that brings down the run time of these algorithms to be just the data collection time.

Model-based Reinforcement Learning

Geometry-Aware Neural Rendering

1 code implementation NeurIPS 2019 Josh Tobin, OpenAI Robotics, Pieter Abbeel

Understanding the 3-dimensional structure of the world is a core challenge in computer vision and robotics.

Neural Rendering

On the Utility of Learning about Humans for Human-AI Coordination

1 code implementation NeurIPS 2019 Micah Carroll, Rohin Shah, Mark K. Ho, Thomas L. Griffiths, Sanjit A. Seshia, Pieter Abbeel, Anca Dragan

While we would like agents that can coordinate with humans, current algorithms such as self-play and population-based training create agents that can coordinate with themselves.

Checkmate: Breaking the Memory Wall with Optimal Tensor Rematerialization

2 code implementations 7 Oct 2019 Paras Jain, Ajay Jain, Aniruddha Nrusimha, Amir Gholami, Pieter Abbeel, Kurt Keutzer, Ion Stoica, Joseph E. Gonzalez

We formalize the problem of trading-off DNN training time and memory requirements as the tensor rematerialization optimization problem, a generalization of prior checkpointing strategies.
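
Uniform gradient checkpointing shows the underlying memory-for-compute trade-off in miniature; Checkmate itself solves for an optimal rematerialization schedule rather than checkpointing blocks uniformly. PyTorch's built-in utility serves as a stand-in here:

```python
import torch
from torch.utils.checkpoint import checkpoint_sequential

# An 8-block toy network; checkpointing stores only segment-boundary
# activations and recomputes the rest during the backward pass.
blocks = [torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.ReLU())
          for _ in range(8)]
net = torch.nn.Sequential(*blocks)

x = torch.randn(64, 512, requires_grad=True)
out = checkpoint_sequential(net, 4, x)   # 4 segments: roughly 4x fewer stored activations
out.sum().backward()
```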

PatchFormer: A neural architecture for self-supervised representation learning on images

no code implementations 25 Sep 2019 Aravind Srinivas, Pieter Abbeel

In this paper, we propose a neural architecture for self-supervised representation learning on raw images called the PatchFormer which learns to model spatial dependencies across patches in a raw image.

Fine-tuning Representation Learning +1

Dynamical System Embedding for Efficient Intrinsically Motivated Artificial Agents

no code implementations 25 Sep 2019 Ruihan Zhao, Stas Tiomkin, Pieter Abbeel

In this work, we develop a novel approach for the estimation of empowerment in unknown arbitrary dynamics from visual stimulus only, without sampling for the estimation of MIAS.

rlpyt: A Research Code Base for Deep Reinforcement Learning in PyTorch

7 code implementations 3 Sep 2019 Adam Stooke, Pieter Abbeel

rlpyt is designed as a high-throughput code base for small- to medium-scale research in deep RL.

Q-Learning

DoorGym: A Scalable Door Opening Environment And Baseline Agent

1 code implementation 5 Aug 2019 Yusuke Urakami, Alec Hodgkinson, Casey Carlin, Randall Leu, Luca Rigazio, Pieter Abbeel

We introduce DoorGym, an open-source door opening simulation framework designed to utilize domain randomization to train a stable policy.

Likelihood Contribution based Multi-scale Architecture for Generative Flows

no code implementations 5 Aug 2019 Hari Prasanna Das, Pieter Abbeel, Costas J. Spanos

An effective solution to this challenge, proposed by Dinh et al. (2016), is a multi-scale architecture based on iterative early factorization of a part of the total dimensions at regular intervals.

Dimensionality Reduction

BagNet: Berkeley Analog Generator with Layout Optimizer Boosted with Deep Neural Networks

no code implementations 23 Jul 2019 Kourosh Hakhamaneshi, Nick Werblun, Pieter Abbeel, Vladimir Stojanovic

The discrepancy between post-layout and schematic simulation results continues to widen in analog design due in part to the domination of layout parasitics.

Benchmarking Model-Based Reinforcement Learning

2 code implementations 3 Jul 2019 Tingwu Wang, Xuchan Bao, Ignasi Clavera, Jerrick Hoang, Yeming Wen, Eric Langlois, Shunshi Zhang, Guodong Zhang, Pieter Abbeel, Jimmy Ba

Model-based reinforcement learning (MBRL) is widely seen as having the potential to be significantly more sample efficient than model-free RL.

Model-based Reinforcement Learning

On the Feasibility of Learning, Rather than Assuming, Human Biases for Reward Inference

no code implementations 23 Jun 2019 Rohin Shah, Noah Gundotra, Pieter Abbeel, Anca D. Dragan

But in the era of deep learning, a natural suggestion researchers make is to avoid mathematical models of human behavior that are fraught with specific assumptions, and instead use a purely data-driven approach.

Evaluating Protein Transfer Learning with TAPE

4 code implementations NeurIPS 2019 Roshan Rao, Nicholas Bhattacharya, Neil Thomas, Yan Duan, Xi Chen, John Canny, Pieter Abbeel, Yun S. Song

Semi-supervised learning has emerged as an important paradigm in protein modeling due to the high cost of acquiring supervised protein labels, but the current literature is fragmented when it comes to datasets and standardized evaluation techniques.

Representation Learning Transfer Learning

Goal-conditioned Imitation Learning

1 code implementation NeurIPS 2019 Yiming Ding, Carlos Florensa, Mariano Phielipp, Pieter Abbeel

Designing rewards for Reinforcement Learning (RL) is challenging because it needs to convey the desired task, be efficient to optimize, and be easy to compute.

Imitation Learning

Learning latent state representation for speeding up exploration

no code implementations 27 May 2019 Giulia Vezzani, Abhishek Gupta, Lorenzo Natale, Pieter Abbeel

In this work, we take a representation learning viewpoint on exploration, utilizing prior experience to learn effective latent representations, which can subsequently indicate which regions to explore.

Representation Learning

MCP: Learning Composable Hierarchical Control with Multiplicative Compositional Policies

1 code implementation NeurIPS 2019 Xue Bin Peng, Michael Chang, Grace Zhang, Pieter Abbeel, Sergey Levine

In this work, we propose multiplicative compositional policies (MCP), a method for learning reusable motor skills that can be composed to produce a range of complex behaviors.

Continuous Control

Compression with Flows via Local Bits-Back Coding

1 code implementation NeurIPS 2019 Jonathan Ho, Evan Lohn, Pieter Abbeel

Likelihood-based generative models are the backbones of lossless compression due to the guaranteed existence of codes with lengths close to negative log likelihood.

Bit-Swap: Recursive Bits-Back Coding for Lossless Compression with Hierarchical Latent Variables

1 code implementation 16 May 2019 Friso H. Kingma, Pieter Abbeel, Jonathan Ho

The bits-back argument suggests that latent variable models can be turned into lossless compression schemes.

Latent Variable Models

Population Based Augmentation: Efficient Learning of Augmentation Policy Schedules

3 code implementations 14 May 2019 Daniel Ho, Eric Liang, Ion Stoica, Pieter Abbeel, Xi Chen

A key challenge in leveraging data augmentation for neural network training is choosing an effective augmentation policy from a large search space of candidate operations.

Image Augmentation

Learning Robotic Manipulation through Visual Planning and Acting

no code implementations 11 May 2019 Angelina Wang, Thanard Kurutach, Kara Liu, Pieter Abbeel, Aviv Tamar

We further demonstrate our approach on learning to imagine and execute in 3 environments, the final of which is deformable rope manipulation on a PR2 robot.

Visual Tracking

Deep Unsupervised Cardinality Estimation

1 code implementation 10 May 2019 Zongheng Yang, Eric Liang, Amog Kamsetty, Chenggang Wu, Yan Duan, Xi Chen, Pieter Abbeel, Joseph M. Hellerstein, Sanjay Krishnan, Ion Stoica

To produce a truly usable estimator, we develop a Monte Carlo integration scheme on top of autoregressive models that can efficiently handle range queries with dozens of dimensions or more.

Density Estimation

Guided Meta-Policy Search

no code implementations NeurIPS 2019 Russell Mendonca, Abhishek Gupta, Rosen Kralev, Pieter Abbeel, Sergey Levine, Chelsea Finn

Reinforcement learning (RL) algorithms have demonstrated promising results on complex tasks, yet often require impractical numbers of samples since they learn from scratch.

Continuous Control Imitation Learning +2

Towards Characterizing Divergence in Deep Q-Learning

no code implementations 21 Mar 2019 Joshua Achiam, Ethan Knight, Pieter Abbeel

Deep Q-Learning (DQL), a family of temporal difference algorithms for control, employs three techniques collectively known as the 'deadly triad' in reinforcement learning: bootstrapping, off-policy learning, and function approximation.

Continuous Control OpenAI Gym +1

Domain Randomization for Active Pose Estimation

no code implementations 10 Mar 2019 Xinyi Ren, Jianlan Luo, Eugen Solowjow, Juan Aparicio Ojea, Abhishek Gupta, Aviv Tamar, Pieter Abbeel

In this work, we investigate how to improve the accuracy of domain randomization based pose estimation.

Pose Estimation

Preferences Implicit in the State of the World

1 code implementation ICLR 2019 Rohin Shah, Dmitrii Krasheninnikov, Jordan Alexander, Pieter Abbeel, Anca Dragan

We find that information from the initial state can be used to infer both side effects that should be avoided as well as preferences for how the environment should be organized.

Addressing Sample Complexity in Visual Tasks Using HER and Hallucinatory GANs

2 code implementations NeurIPS 2019 Himanshu Sahni, Toby Buckley, Pieter Abbeel, Ilya Kuzovkin

In this work, we show how visual trajectories can be hallucinated to appear successful by altering agent observations using a generative model trained on relatively few snapshots of the goal.

The Importance of Sampling in Meta-Reinforcement Learning

no code implementations NeurIPS 2018 Bradly Stadie, Ge Yang, Rein Houthooft, Peter Chen, Yan Duan, Yuhuai Wu, Pieter Abbeel, Ilya Sutskever

Results are presented on a new environment we call 'Krazy World': a difficult high-dimensional gridworld which is designed to highlight the importance of correctly differentiating through sampling distributions in meta-reinforcement learning.

Meta Reinforcement Learning

Guiding Policies with Language via Meta-Learning

2 code implementations ICLR 2019 John D. Co-Reyes, Abhishek Gupta, Suvansh Sanjeev, Nick Altieri, Jacob Andreas, John DeNero, Pieter Abbeel, Sergey Levine

However, a single instruction may be insufficient to fully communicate our intent or, even if it is, may be insufficient for an autonomous agent to actually understand how to perform the desired task.

Imitation Learning Meta-Learning

An Algorithmic Perspective on Imitation Learning

no code implementations 16 Nov 2018 Takayuki Osa, Joni Pajarinen, Gerhard Neumann, J. Andrew Bagnell, Pieter Abbeel, Jan Peters

This process of learning from demonstrations, and the study of algorithms to do so, is called imitation learning.

Imitation Learning Learning Theory

One-Shot Hierarchical Imitation Learning of Compound Visuomotor Tasks

no code implementations 25 Oct 2018 Tianhe Yu, Pieter Abbeel, Sergey Levine, Chelsea Finn

We consider the problem of learning multi-stage vision-based tasks on a real robot from a single video of a human performing the task, while leveraging demonstration data of subtasks with other objects.

Imitation Learning

Establishing Appropriate Trust via Critical States

no code implementations 18 Oct 2018 Sandy H. Huang, Kush Bhatia, Pieter Abbeel, Anca D. Dragan

In order to effectively interact with or supervise a robot, humans need to have an accurate mental model of its capabilities and how it acts.

Robotics

Composable Action-Conditioned Predictors: Flexible Off-Policy Learning for Robot Navigation

1 code implementation 16 Oct 2018 Gregory Kahn, Adam Villaflor, Pieter Abbeel, Sergey Levine

We show that a simulated robotic car and a real-world RC car can gather data and train fully autonomously without any human-provided labels beyond those needed to train the detectors, and then at test-time be able to accomplish a variety of different tasks.

Robot Navigation

SFV: Reinforcement Learning of Physical Skills from Videos

1 code implementation 8 Oct 2018 Xue Bin Peng, Angjoo Kanazawa, Jitendra Malik, Pieter Abbeel, Sergey Levine

In this paper, we propose a method that enables physically simulated characters to learn skills from videos (SFV).

Motion Capture Pose Estimation

Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow

5 code implementations ICLR 2019 Xue Bin Peng, Angjoo Kanazawa, Sam Toyer, Pieter Abbeel, Sergey Levine

By enforcing a constraint on the mutual information between the observations and the discriminator's internal representation, we can effectively modulate the discriminator's accuracy and maintain useful and informative gradients.

Continuous Control Image Generation +1

Model-Based Reinforcement Learning via Meta-Policy Optimization

no code implementations 14 Sep 2018 Ignasi Clavera, Jonas Rothfuss, John Schulman, Yasuhiro Fujita, Tamim Asfour, Pieter Abbeel

Finally, we demonstrate that our approach is able to match the asymptotic performance of model-free methods while requiring significantly less experience.

Model-based Reinforcement Learning

SOLAR: Deep Structured Representations for Model-Based Reinforcement Learning

1 code implementation ICLR 2019 Marvin Zhang, Sharad Vikram, Laura Smith, Pieter Abbeel, Matthew J. Johnson, Sergey Levine

Model-based reinforcement learning (RL) has proven to be a data efficient approach for learning control tasks but is difficult to utilize in domains with complex observations such as images.

Model-based Reinforcement Learning

Transfer Learning for Estimating Causal Effects using Neural Networks

no code implementations 23 Aug 2018 Sören R. Künzel, Bradly C. Stadie, Nikita Vemuri, Varsha Ramakrishnan, Jasjeet S. Sekhon, Pieter Abbeel

We develop new algorithms for estimating heterogeneous treatment effects, combining recent developments in transfer learning for neural networks with insights from the causal inference literature.

Causal Inference Transfer Learning

Variational Option Discovery Algorithms

no code implementations 26 Jul 2018 Joshua Achiam, Harrison Edwards, Dario Amodei, Pieter Abbeel

We explore methods for option discovery based on variational inference and make two algorithmic contributions.

Curriculum Learning Variational Inference

Learning Plannable Representations with Causal InfoGAN

1 code implementation NeurIPS 2018 Thanard Kurutach, Aviv Tamar, Ge Yang, Stuart Russell, Pieter Abbeel

Finally, to generate a visual plan, we project the current and goal observations onto their respective states in the planning model, plan a trajectory, and then use the generative model to transform the trajectory to a sequence of observations.

Representation Learning

Universal Planning Networks: Learning Generalizable Representations for Visuomotor Control

1 code implementation ICML 2018 Aravind Srinivas, Allan Jabri, Pieter Abbeel, Sergey Levine, Chelsea Finn

A key challenge in complex visuomotor control is learning abstract representations that are effective for specifying goals, planning, and generalization.

Imitation Learning

The Limits and Potentials of Deep Learning for Robotics

no code implementations 18 Apr 2018 Niko Sünderhauf, Oliver Brock, Walter Scheirer, Raia Hadsell, Dieter Fox, Jürgen Leitner, Ben Upcroft, Pieter Abbeel, Wolfram Burgard, Michael Milford, Peter Corke

In this paper we discuss a number of robotics-specific learning, reasoning, and embodiment challenges for deep learning.

Robotics

Latent Space Policies for Hierarchical Reinforcement Learning

no code implementations ICML 2018 Tuomas Haarnoja, Kristian Hartikainen, Pieter Abbeel, Sergey Levine

In contrast to methods that explicitly restrict or cripple lower layers of a hierarchy to force them to use higher-level modulating signals, each layer in our framework is trained to directly solve the task, but acquires a range of diverse strategies via a maximum entropy reinforcement learning objective.

Hierarchical Reinforcement Learning

DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills

6 code implementations 8 Apr 2018 Xue Bin Peng, Pieter Abbeel, Sergey Levine, Michiel Van de Panne

We further explore a number of methods for integrating multiple clips into the learning process to develop multi-skilled agents capable of performing a rich repertoire of diverse skills.

Stochastic Adversarial Video Prediction

3 code implementations ICLR 2019 Alex X. Lee, Richard Zhang, Frederik Ebert, Pieter Abbeel, Chelsea Finn, Sergey Levine

However, learning to predict raw future observations, such as frames in a video, is exceedingly challenging – the ambiguous nature of the problem can cause a naively designed model to average together possible futures into a single, blurry prediction.

Representation Learning Video Generation +1

Universal Planning Networks

1 code implementation 2 Apr 2018 Aravind Srinivas, Allan Jabri, Pieter Abbeel, Sergey Levine, Chelsea Finn

We find that the representations learned are not only effective for goal-directed visual imitation via gradient-based trajectory optimization, but can also provide a metric for specifying goals using images.

Imitation Learning Representation Learning +1

Learning to Adapt in Dynamic, Real-World Environments Through Meta-Reinforcement Learning

2 code implementations ICLR 2019 Anusha Nagabandi, Ignasi Clavera, Simin Liu, Ronald S. Fearing, Pieter Abbeel, Sergey Levine, Chelsea Finn

Although reinforcement learning methods can achieve impressive results in simulation, the real world presents two major challenges: generating samples is exceedingly expensive, and unexpected perturbations or unseen situations cause proficient but specialized policies to fail at test time.

Continuous Control Meta-Learning +3

Variance Reduction for Policy Gradient with Action-Dependent Factorized Baselines

no code implementations ICLR 2018 Cathy Wu, Aravind Rajeswaran, Yan Duan, Vikash Kumar, Alexandre M. Bayen, Sham Kakade, Igor Mordatch, Pieter Abbeel

To mitigate this issue, we derive a bias-free action-dependent baseline for variance reduction which fully exploits the structural form of the stochastic policy itself and does not make any additional assumptions about the MDP.

Policy Gradient Methods

Learning Robotic Assembly from CAD

no code implementations 20 Mar 2018 Garrett Thomas, Melissa Chien, Aviv Tamar, Juan Aparicio Ojea, Pieter Abbeel

We propose to leverage this prior knowledge by guiding RL along a geometric motion plan, calculated using the CAD data.

Motion Planning

Composable Deep Reinforcement Learning for Robotic Manipulation

1 code implementation 19 Mar 2018 Tuomas Haarnoja, Vitchyr Pong, Aurick Zhou, Murtaza Dalal, Pieter Abbeel, Sergey Levine

Second, we show that policies learned with soft Q-learning can be composed to create new policies, and that the optimality of the resulting policy can be bounded in terms of the divergence between the composed policies.

Q-Learning
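
A discrete-action stand-in for the composition result: act with the average of two learned soft Q-functions under a MaxEnt (softmax) policy. The paper works in continuous control and bounds the suboptimality of the composed policy; this sketch only conveys the mechanism:

```python
import torch

def composed_action(q1, q2, obs, alpha=1.0):
    """q1, q2: soft Q-functions for two tasks, each mapping obs -> (B, A) values."""
    q = 0.5 * (q1(obs) + q2(obs))               # compose the two tasks
    probs = torch.softmax(q / alpha, dim=-1)    # soft (maximum-entropy) policy
    return torch.multinomial(probs, num_samples=1)
```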

Accelerated Methods for Deep Reinforcement Learning

8 code implementations 7 Mar 2018 Adam Stooke, Pieter Abbeel

Deep reinforcement learning (RL) has achieved many recent successes, yet experiment turn-around time remains a key bottleneck in research and in practice.

Atari Games

Model-Ensemble Trust-Region Policy Optimization

2 code implementations ICLR 2018 Thanard Kurutach, Ignasi Clavera, Yan Duan, Aviv Tamar, Pieter Abbeel

In this paper, we analyze the behavior of vanilla model-based reinforcement learning methods when deep neural networks are used to learn both the model and the policy, and show that the learned policy tends to exploit regions where insufficient data is available for the model to be learned, causing instability in training.

Continuous Control Model-based Reinforcement Learning

Evolved Policy Gradients

3 code implementations NeurIPS 2018 Rein Houthooft, Richard Y. Chen, Phillip Isola, Bradly C. Stadie, Filip Wolski, Jonathan Ho, Pieter Abbeel

We propose a metalearning approach for learning gradient-based reinforcement learning (RL) algorithms.

Self-Supervised Learning of Object Motion Through Adversarial Video Prediction

no code implementations ICLR 2018 Alex X. Lee, Frederik Ebert, Richard Zhang, Chelsea Finn, Pieter Abbeel, Sergey Levine

In this paper, we study the problem of multi-step video prediction, where the goal is to predict a sequence of future frames conditioned on a short context.

Self-Supervised Learning Video Prediction

PixelSNAIL: An Improved Autoregressive Generative Model

4 code implementations ICML 2018 Xi Chen, Nikhil Mishra, Mostafa Rohaninejad, Pieter Abbeel

Autoregressive generative models consistently achieve the best results in density estimation tasks involving high dimensional data, such as images or audio.

Density Estimation Image Generation +1

A Berkeley View of Systems Challenges for AI

no code implementations 15 Dec 2017 Ion Stoica, Dawn Song, Raluca Ada Popa, David Patterson, Michael W. Mahoney, Randy Katz, Anthony D. Joseph, Michael Jordan, Joseph M. Hellerstein, Joseph E. Gonzalez, Ken Goldberg, Ali Ghodsi, David Culler, Pieter Abbeel

With the increasing commoditization of computer vision, speech recognition and machine translation systems and the widespread deployment of learning-based back-end technologies such as digital advertising and intelligent infrastructures, AI (Artificial Intelligence) has moved from research labs to production.

Machine Translation Speech Recognition

Safer Classification by Synthesis

no code implementations 22 Nov 2017 William Wang, Angelina Wang, Aviv Tamar, Xi Chen, Pieter Abbeel

We posit that a generative approach is the natural remedy for this problem, and propose a method for classification using generative models.

Classification General Classification

Inverse Reward Design

no code implementations NeurIPS 2017 Dylan Hadfield-Menell, Smitha Milli, Pieter Abbeel, Stuart Russell, Anca Dragan

When designing the reward, we might think of some specific training scenarios, and make sure that the reward will lead to the right behavior in those scenarios.

Interpretable and Pedagogical Examples

no code implementations ICLR 2018 Smitha Milli, Pieter Abbeel, Igor Mordatch

Teachers intentionally pick the most informative examples to show their students.

Meta Learning Shared Hierarchies

2 code implementations ICLR 2018 Kevin Frans, Jonathan Ho, Xi Chen, Pieter Abbeel, John Schulman

We develop a metalearning approach for learning hierarchically structured policies, improving sample efficiency on unseen tasks through the use of shared primitives – policies that are executed for large numbers of timesteps.

Legged Robots Meta-Learning

Asymmetric Actor Critic for Image-Based Robot Learning

no code implementations 18 Oct 2017 Lerrel Pinto, Marcin Andrychowicz, Peter Welinder, Wojciech Zaremba, Pieter Abbeel

While several recent works have shown promising results in transferring policies trained in simulation to the real world, they often do not fully utilize the advantage of working with a simulator.

Decision Making

Sim-to-Real Transfer of Robotic Control with Dynamics Randomization

no code implementations 18 Oct 2017 Xue Bin Peng, Marcin Andrychowicz, Wojciech Zaremba, Pieter Abbeel

By randomizing the dynamics of the simulator during training, we are able to develop policies that are capable of adapting to very different dynamics, including ones that differ significantly from the dynamics on which the policies were trained.

Robotics Systems and Control
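
The training-loop change is small enough to sketch: resample the simulator's physical parameters at the start of every episode so the policy must adapt to varied dynamics. The parameter names and ranges below are invented for illustration:

```python
import random

def sample_dynamics():
    """Draw one random set of physical parameters (hypothetical names/ranges)."""
    return {
        "mass_scale": random.uniform(0.5, 1.5),
        "friction_coeff": random.uniform(0.5, 1.2),
        "action_delay_steps": random.randint(0, 3),
    }

episode_params = sample_dynamics()  # pushed into the simulator before each rollout
```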

Domain Randomization and Generative Models for Robotic Grasping

no code implementations 17 Oct 2017 Joshua Tobin, Lukas Biewald, Rocky Duan, Marcin Andrychowicz, Ankur Handa, Vikash Kumar, Bob McGrew, Jonas Schneider, Peter Welinder, Wojciech Zaremba, Pieter Abbeel

In this work, we explore a novel data generation pipeline for training a deep neural network to perform grasp planning that applies the idea of domain randomization to object synthesis.

Robotic Grasping