Search Results for author: Karol Hausman

Found 48 papers, 13 papers with code

Chain of Code: Reasoning with a Language Model-Augmented Code Emulator

no code implementations · 7 Dec 2023 · Chengshu Li, Jacky Liang, Andy Zeng, Xinyun Chen, Karol Hausman, Dorsa Sadigh, Sergey Levine, Li Fei-Fei, Fei Xia, Brian Ichter

For example, consider prompting an LM to write code that counts the number of times it detects sarcasm in an essay: the LM may struggle to write an implementation for "detect_sarcasm(string)" that can be executed by the interpreter (handling the edge cases would be insurmountable).

Language Modelling
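The idea in the excerpt, let the interpreter run what it can and have the LM simulate what it cannot, fits in a few lines. A minimal sketch, assuming a `query_lm` stub for the actual model call; this is an illustration, not the authors' implementation:

```python
def query_lm(prompt: str) -> str:
    """Placeholder for a language-model call returning a simulated value."""
    raise NotImplementedError("wire this to an actual LM")

class LMulator:
    """Run LM-written code; undefined semantic functions fall back to the LM."""

    def __init__(self):
        self.state = {}

    def run(self, code: str):
        try:
            exec(code, self.state)            # the interpreter handles what it can
        except NameError as err:
            missing = str(err).split("'")[1]  # e.g. "detect_sarcasm" is undefined
            # Ask the LM to play interpreter for the missing semantic function.
            self.state[missing] = lambda *args: query_lm(
                f"Simulate the return value of {missing}{args}"
            )
            exec(code, self.state)            # retry with the LM-backed stub
            # (a real implementation would resume rather than re-run the code)
```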

Foundations for Transfer in Reinforcement Learning: A Taxonomy of Knowledge Modalities

no code implementations · 4 Dec 2023 · Markus Wulfmeier, Arunkumar Byravan, Sarah Bechtle, Karol Hausman, Nicolas Heess

Contemporary artificial intelligence systems exhibit rapidly growing abilities, accompanied by growing resource requirements, expansive datasets, and corresponding investments in computing infrastructure.

Computational Efficiency · reinforcement-learning +1

SARA-RT: Scaling up Robotics Transformers with Self-Adaptive Robust Attention

no code implementations · 4 Dec 2023 · Isabel Leal, Krzysztof Choromanski, Deepali Jain, Avinava Dubey, Jake Varley, Michael Ryoo, Yao Lu, Frederick Liu, Vikas Sindhwani, Quan Vuong, Tamas Sarlos, Ken Oslund, Karol Hausman, Kanishka Rao

We present Self-Adaptive Robust Attention for Robotics Transformers (SARA-RT): a new paradigm for addressing the emerging challenge of scaling up Robotics Transformers (RT) for on-robot deployment.
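The excerpt does not spell out the mechanism, but SARA-RT operates in the linear-attention family, where attention cost drops from quadratic to linear in sequence length. A generic linear-attention sketch for orientation only (the feature map here is my assumption, not the paper's mechanism):

```python
import numpy as np

def linear_attention(Q, K, V):
    """Kernelized attention in O(n): queries/keys pass through a positive
    feature map, and keys/values are summarized once instead of per query."""
    phi = lambda x: np.maximum(x, 0.0) + 1e-6   # assumed positive feature map
    Qf, Kf = phi(Q), phi(K)                      # (n, d) each
    kv = Kf.T @ V                                # (d, d_v) key/value summary
    z = Qf @ Kf.sum(axis=0)                      # (n,) per-query normalizers
    return (Qf @ kv) / z[:, None]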

What Makes Pre-Trained Visual Representations Successful for Robust Manipulation?

no code implementations · 3 Nov 2023 · Kaylee Burns, Zach Witzel, Jubayer Ibn Hamid, Tianhe Yu, Chelsea Finn, Karol Hausman

Inspired by the success of transfer learning in computer vision, roboticists have investigated visual pre-training as a means to improve the learning efficiency and generalization ability of policies learned from pixels.

Out-of-Distribution Generalization · Transfer Learning

Open-World Object Manipulation using Pre-trained Vision-Language Models

no code implementations · 2 Mar 2023 · Austin Stone, Ted Xiao, Yao Lu, Keerthana Gopalakrishnan, Kuang-Huei Lee, Quan Vuong, Paul Wohlhart, Sean Kirmani, Brianna Zitkovich, Fei Xia, Chelsea Finn, Karol Hausman

This brings up a notably difficult challenge for robots: while robot learning approaches allow robots to learn many different behaviors from first-hand experience, it is impractical for robots to have first-hand experiences that span all of this semantic information.

Language Modelling · Object

Grounded Decoding: Guiding Text Generation with Grounded Models for Embodied Agents

no code implementations · NeurIPS 2023 · Wenlong Huang, Fei Xia, Dhruv Shah, Danny Driess, Andy Zeng, Yao Lu, Pete Florence, Igor Mordatch, Sergey Levine, Karol Hausman, Brian Ichter

Recent progress in large language models (LLMs) has demonstrated the ability to learn and leverage Internet-scale knowledge through pre-training with autoregressive models.

Language Modelling · Text Generation
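The decoding rule the title points toward can be summarized compactly: choose tokens that are simultaneously likely under the LLM and scored as feasible by grounded models. A hedged sketch with hypothetical score functions:

```python
def grounded_decode_step(candidates, lm_logprob, grounded_logprob):
    """Pick the next token by combining LM likelihood with grounding.

    candidates:       iterable of candidate tokens
    lm_logprob:       token -> log p_LM(token | prefix)
    grounded_logprob: token -> log p_G(token), e.g. an affordance model
    """
    return max(candidates, key=lambda t: lm_logprob(t) + grounded_logprob(t))
```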

Scaling Robot Learning with Semantically Imagined Experience

no code implementations · 22 Feb 2023 · Tianhe Yu, Ted Xiao, Austin Stone, Jonathan Tompson, Anthony Brohan, Su Wang, Jaspiar Singh, Clayton Tan, Dee M, Jodilyn Peralta, Brian Ichter, Karol Hausman, Fei Xia

Specifically, we make use of state-of-the-art text-to-image diffusion models and perform aggressive data augmentation on top of our existing robotic manipulation datasets via inpainting various unseen objects for manipulation, backgrounds, and distractors with text guidance.

Data Augmentation
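A sketch of the augmentation recipe as described: mask part of a logged robot image and let a text-guided diffusion model inpaint a novel object, background, or distractor. `diffusion_inpaint` is a stand-in for whatever inpainting model is available, not the paper's pipeline:

```python
import numpy as np

def diffusion_inpaint(image: np.ndarray, mask: np.ndarray, prompt: str) -> np.ndarray:
    """Placeholder for a text-guided diffusion inpainting call."""
    raise NotImplementedError("plug in an off-the-shelf inpainting model")

def augment_episode(frames, mask, prompt="a ceramic mug on the counter"):
    """Rewrite the same masked region across an episode's frames, yielding a
    semantically new but physically consistent variant of the demonstration."""
    return [diffusion_inpaint(frame, mask, prompt) for frame in frames]
```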

Robotic Skill Acquisition via Instruction Augmentation with Vision-Language Models

no code implementations · 21 Nov 2022 · Ted Xiao, Harris Chan, Pierre Sermanet, Ayzaan Wahid, Anthony Brohan, Karol Hausman, Sergey Levine, Jonathan Tompson

To accomplish this, we introduce Data-driven Instruction Augmentation for Language-conditioned control (DIAL): we leverage the semantic understanding of CLIP to propagate semi-supervised language labels onto large datasets of unlabelled demonstration data, and then train language-conditioned policies on the augmented datasets.

Imitation Learning
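A minimal sketch of the relabeling step, assuming a CLIP-style similarity function; the candidate-instruction pool and top-k selection are illustrative choices, not the paper's exact procedure:

```python
def clip_similarity(image, text) -> float:
    """Placeholder for CLIP image-text similarity."""
    raise NotImplementedError

def relabel_trajectory(frames, candidate_instructions, top_k=3):
    """Score candidate instructions against a demonstration's frames and keep
    the best matches as additional language labels for policy training."""
    scored = sorted(
        candidate_instructions,
        key=lambda text: sum(clip_similarity(f, text) for f in frames),
        reverse=True,
    )
    return scored[:top_k]
```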

Offline Reinforcement Learning at Multiple Frequencies

no code implementations · 26 Jul 2022 · Kaylee Burns, Tianhe Yu, Chelsea Finn, Karol Hausman

In this paper, we focus on one particular aspect of heterogeneity: learning from offline data collected at different control frequencies.

Offline RL · reinforcement-learning +1

Jump-Start Reinforcement Learning

no code implementations · 5 Apr 2022 · Ikechukwu Uchendu, Ted Xiao, Yao Lu, Banghua Zhu, Mengyuan Yan, Joséphine Simon, Matthew Bennice, Chuyuan Fu, Cong Ma, Jiantao Jiao, Sergey Levine, Karol Hausman

In addition, we provide an upper bound on the sample complexity of JSRL and show that with the help of a guide-policy, one can improve the sample complexity for non-optimism exploration methods from exponential in horizon to polynomial.

reinforcement-learning · Reinforcement Learning (RL)
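JSRL's roll-in/roll-out structure is easy to picture in code. A sketch assuming a gym-style environment interface; shrinking `h_guide` as the learner improves gives the curriculum whose sample complexity the excerpt bounds:

```python
def jsrl_episode(env, guide_policy, exploration_policy, horizon, h_guide):
    """Guide policy acts for the first h_guide steps, then the learner takes over."""
    obs = env.reset()
    transitions = []
    for t in range(horizon):
        acting = guide_policy if t < h_guide else exploration_policy
        action = acting(obs)
        next_obs, reward, done, info = env.step(action)
        transitions.append((obs, action, reward, next_obs, done))
        obs = next_obs
        if done:
            break
    return transitions
```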

Autonomous Reinforcement Learning: Formalism and Benchmarking

2 code implementations · ICLR 2022 · Archit Sharma, Kelvin Xu, Nikhil Sardana, Abhishek Gupta, Karol Hausman, Sergey Levine, Chelsea Finn

In this paper, we aim to address this discrepancy by laying out a framework for Autonomous Reinforcement Learning (ARL): reinforcement learning where the agent not only learns through its own experience, but also contends with the lack of human supervision to reset between trials.

Benchmarking · reinforcement-learning +1
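The setting is easiest to see next to the standard episodic loop: the agent gets one initial reset and must otherwise keep learning from an unbroken stream of experience. A schematic sketch with a gym-style interface; the `reset_interval` knob modeling rare human interventions is my addition, not the benchmark's API:

```python
def autonomous_rl_loop(env, agent, total_steps, reset_interval=None):
    """Reset-free training: no per-episode resets, only rare interventions."""
    obs = env.reset()                                  # single initial reset
    for step in range(1, total_steps + 1):
        action = agent.act(obs)
        next_obs, reward, done, info = env.step(action)
        agent.observe(obs, action, reward, next_obs)   # 'done' marks no reset here
        obs = next_obs
        if reset_interval is not None and step % reset_interval == 0:
            obs = env.reset()                          # costly human intervention
```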

Data Sharing without Rewards in Multi-Task Offline Reinforcement Learning

no code implementations · 29 Sep 2021 · Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Chelsea Finn, Sergey Levine, Karol Hausman

However, these benefits come at a cost -- for data to be shared between tasks, each transition must be annotated with reward labels corresponding to other tasks.

Multi-Task Learning · Offline RL +2
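The cost the excerpt describes is the relabeling step itself: pooling data across tasks requires every transition to carry every task's reward. A sketch of that step; the data layout and function names are illustrative:

```python
def relabel_for_sharing(datasets, reward_fns):
    """Annotate every transition with every task's reward so data can be pooled.

    datasets:   dict task -> list of (s, a, s_next) transitions
    reward_fns: dict task -> reward function r(s, a, s_next)
    """
    pooled = {task: [] for task in reward_fns}
    for transitions in datasets.values():
        for s, a, s_next in transitions:
            for task, r in reward_fns.items():
                # This per-task labeling is the burden the paper seeks to remove.
                pooled[task].append((s, a, r(s, a, s_next), s_next))
    return pooled
```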

Conservative Data Sharing for Multi-Task Offline Reinforcement Learning

no code implementations · NeurIPS 2021 · Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Sergey Levine, Chelsea Finn

We argue that a natural use case of offline RL is in settings where we can pool large amounts of data collected in various scenarios for solving different tasks, and utilize all of this data to learn behaviors for all the tasks more effectively rather than training each one in isolation.

Offline RL · reinforcement-learning +1

MT-Opt: Continuous Multi-Task Robotic Reinforcement Learning at Scale

no code implementations · 16 Apr 2021 · Dmitry Kalashnikov, Jacob Varley, Yevgen Chebotar, Benjamin Swanson, Rico Jonschkowski, Chelsea Finn, Sergey Levine, Karol Hausman

In this paper, we study how a large-scale collective robotic learning system can acquire a repertoire of behaviors simultaneously, sharing exploration, experience, and representations across tasks.

reinforcement-learning · Reinforcement Learning (RL)

Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills

no code implementations · 15 Apr 2021 · Yevgen Chebotar, Karol Hausman, Yao Lu, Ted Xiao, Dmitry Kalashnikov, Jake Varley, Alex Irpan, Benjamin Eysenbach, Ryan Julian, Chelsea Finn, Sergey Levine

We consider the problem of learning useful robotic skills from previously collected offline data without access to manually specified rewards or additional online exploration, a setting that is becoming increasingly important for scaling robot learning by reusing past robotic data.

Q-Learning · reinforcement-learning +1

A Geometric Perspective on Self-Supervised Policy Adaptation

no code implementations · 14 Nov 2020 · Cristian Bodnar, Karol Hausman, Gabriel Dulac-Arnold, Rico Jonschkowski

One of the most challenging aspects of real-world reinforcement learning (RL) is the multitude of unpredictable and ever-changing distractions that could divert an agent from what it was tasked to do in its training environment.

Reinforcement Learning (RL)

Confidence-rich grid mapping

no code implementations · 29 Jun 2020 · Ali-akbar Agha-mohammadi, Eric Heiden, Karol Hausman, Gaurav S. Sukhatme

Representing the environment is a fundamental task in enabling robots to act autonomously in unknown environments.

Motion Planning

Efficient Adaptation for End-to-End Vision-Based Robotic Manipulation

no code implementations · ICML Workshop LifelongML 2020 · Ryan Julian, Benjamin Swanson, Gaurav S. Sukhatme, Sergey Levine, Chelsea Finn, Karol Hausman

One of the great promises of robot learning systems is that they will be able to learn from their mistakes and continuously adapt to ever-changing environments, but most robot learning systems today are deployed as fixed policies which do not adapt after deployment.

Continual Learning · Robotic Grasping

Modeling Long-horizon Tasks as Sequential Interaction Landscapes

no code implementations · 8 Jun 2020 · Sören Pirk, Karol Hausman, Alexander Toshev, Mohi Khansari

We show that complex plans can be carried out when executing the robotic task, and that the robot can interactively adapt to changes in the environment and recover from failure cases.

Robot Manipulation

Dynamics-Aware Unsupervised Skill Discovery

1 code implementation · ICLR 2020 · Archit Sharma, Shixiang Gu, Sergey Levine, Vikash Kumar, Karol Hausman

Conventionally, model-based reinforcement learning (MBRL) aims to learn a global model for the dynamics of the environment.

Model-based Reinforcement Learning

Emergent Real-World Robotic Skills via Unsupervised Off-Policy Reinforcement Learning

2 code implementations · 27 Apr 2020 · Archit Sharma, Michael Ahn, Sergey Levine, Vikash Kumar, Karol Hausman, Shixiang Gu

Can we instead develop efficient reinforcement learning methods that acquire diverse skills without any reward function, and then repurpose these skills for downstream tasks?

Model Predictive Control · reinforcement-learning +2

Never Stop Learning: The Effectiveness of Fine-Tuning in Robotic Reinforcement Learning

no code implementations · 21 Apr 2020 · Ryan Julian, Benjamin Swanson, Gaurav S. Sukhatme, Sergey Levine, Chelsea Finn, Karol Hausman

One of the great promises of robot learning systems is that they will be able to learn from their mistakes and continuously adapt to ever-changing environments.

Continual Learning · reinforcement-learning +2

Thinking While Moving: Deep Reinforcement Learning with Concurrent Control

no code implementations · ICLR 2020 · Ted Xiao, Eric Jang, Dmitry Kalashnikov, Sergey Levine, Julian Ibarz, Karol Hausman, Alexander Herzog

We study reinforcement learning in settings where sampling an action from the policy must be done concurrently with the time evolution of the controlled system, such as when a robot must decide on the next action while still performing the previous action.

reinforcement-learning · Reinforcement Learning (RL) +1
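The timing structure is the crux: the next action is computed from a state observed while the previous action is still executing. A schematic loop; the nonblocking environment interface is hypothetical, used only to make the overlap visible:

```python
def concurrent_control_loop(env, policy, steps):
    """Select a_{t+1} while a_t is still being executed, so the policy must
    condition on the in-flight action rather than on a frozen state."""
    obs = env.reset()
    prev_action = None
    for _ in range(steps):
        action = policy(obs, prev_action)    # inference overlaps ongoing motion
        obs = env.step_nonblocking(action)   # hypothetical: hand off w/o halting
        prev_action = action
```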

Gradient Surgery for Multi-Task Learning

9 code implementations · NeurIPS 2020 · Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, Chelsea Finn

While deep learning and deep reinforcement learning (RL) systems have demonstrated impressive results in domains such as image classification, game playing, and robotic control, data efficiency remains a major challenge.

Image Classification · Multi-Task Learning +1
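The "surgery" itself is a projection: when two task gradients point against each other, remove the conflicting component. A sketch of that step, matching the paper's PCGrad procedure up to implementation details:

```python
import numpy as np

def pcgrad(grads, rng=None):
    """Project each task gradient off any conflicting task gradient, then sum."""
    rng = rng or np.random.default_rng()
    surgered = []
    for i, g in enumerate(grads):
        g = g.astype(float)
        for j in rng.permutation(len(grads)):     # random task order, as in the paper
            if j != i and g @ grads[j] < 0:       # conflict: negative dot product
                g -= (g @ grads[j]) / (grads[j] @ grads[j]) * grads[j]
        surgered.append(g)
    return sum(surgered)
```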

Relay Policy Learning: Solving Long-Horizon Tasks via Imitation and Reinforcement Learning

1 code implementation · 25 Oct 2019 · Abhishek Gupta, Vikash Kumar, Corey Lynch, Sergey Levine, Karol Hausman

We present relay policy learning, a method for imitation and reinforcement learning that can solve multi-stage, long-horizon robotic tasks.

Imitation Learning · reinforcement-learning +1

Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning

8 code implementations · 24 Oct 2019 · Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Avnish Narayan, Hayden Shively, Adithya Bellathur, Karol Hausman, Chelsea Finn, Sergey Levine

Therefore, if the aim of these methods is to enable faster acquisition of entirely new behaviors, we must evaluate them on task distributions that are sufficiently broad to enable generalization to new behaviors.

Meta-Learning · Meta Reinforcement Learning +3
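Since Meta-World ships with code, here is roughly how the benchmark is driven. This follows the project's documented usage pattern, but the exact API, task names, and step return format may differ across versions:

```python
import random
import metaworld  # pip install metaworld; API may vary by version

ml1 = metaworld.ML1('pick-place-v2')            # one task family, many goal variations
env = ml1.train_classes['pick-place-v2']()
env.set_task(random.choice(ml1.train_tasks))    # sample a goal variation

obs = env.reset()
action = env.action_space.sample()
step_result = env.step(action)                  # 4- or 5-tuple depending on version
```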

Quantile QT-Opt for Risk-Aware Vision-Based Robotic Grasping

no code implementations · 1 Oct 2019 · Cristian Bodnar, Adrian Li, Karol Hausman, Peter Pastor, Mrinal Kalakrishnan

The absence of an actor in Q2-Opt allows us to directly draw a parallel to the previous discrete experiments in the literature without the additional complexities induced by an actor-critic architecture.

Q-Learning · Reinforcement Learning (RL) +1
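With a quantile critic, risk awareness can come from how the quantiles are read out rather than from the architecture. One common readout, shown here as an illustration rather than necessarily the paper's exact choice, is CVaR, the mean of the worst quantiles:

```python
import numpy as np

def cvar_value(quantiles, alpha=0.25):
    """Mean of the lowest alpha-fraction of quantile estimates (risk-averse)."""
    q = np.sort(np.asarray(quantiles))
    k = max(1, int(alpha * q.size))
    return float(q[:k].mean())

def pick_grasp(candidates, quantile_q, alpha=0.25):
    """Choose the grasp action whose lower-tail value is best."""
    return max(candidates, key=lambda a: cvar_value(quantile_q(a), alpha))
```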

Mint: Matrix-Interleaving for Multi-Task Learning

no code implementations · 25 Sep 2019 · Tianhe Yu, Saurabh Kumar, Eric Mitchell, Abhishek Gupta, Karol Hausman, Sergey Levine, Chelsea Finn

Deep learning enables training of large and flexible function approximators from scratch at the cost of large amounts of data.

Multi-Task Learning · reinforcement-learning +1

Dynamics-Aware Unsupervised Discovery of Skills

3 code implementations · 2 Jul 2019 · Archit Sharma, Shixiang Gu, Sergey Levine, Vikash Kumar, Karol Hausman

Conventionally, model-based reinforcement learning (MBRL) aims to learn a global model for the dynamics of the environment.

Model-based Reinforcement Learning

Learning to Interactively Learn and Assist

no code implementations · 24 Jun 2019 · Mark Woodward, Chelsea Finn, Karol Hausman

Most importantly, we find that our approach produces an agent that is capable of learning interactively from a human user, without a set of explicit demonstrations or a reward function, and that achieves significantly better performance when cooperating with a human than a human performing the task alone.

Imitation Learning · Question Answering

Training an Interactive Helper

no code implementations · 24 Jun 2019 · Mark Woodward, Chelsea Finn, Karol Hausman

In this paper, we investigate if, and how, a "helper" agent can be trained to interactively adapt their behavior to maximize the reward of another agent, whom we call the "prime" agent, without observing their reward or receiving explicit demonstrations.

Meta-Learning

Simulator Predictive Control: Using Learned Task Representations and MPC for Zero-Shot Generalization and Sequencing

1 code implementation · 4 Oct 2018 · Zhanpeng He, Ryan Julian, Eric Heiden, Hejia Zhang, Stefan Schaal, Joseph J. Lim, Gaurav Sukhatme, Karol Hausman

We complete unseen tasks by choosing new sequences of skill latents to control the robot using MPC, where our MPC model is composed of the pre-trained skill policy executed in the simulation environment, running in parallel with the real robot.

Model Predictive Control · Zero-shot Generalization
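The excerpt's "MPC model = skill policy run in simulation" pattern, sketched with a hypothetical simulator interface: candidate skill-latent sequences are scored by rolling them out in sim, and the best sequence controls the real robot:

```python
import numpy as np

def simulator_mpc(sim, skill_policy, latent_sequences, cost_fn, steps_per_skill):
    """Evaluate candidate skill-latent sequences in simulation, return the best."""
    best_seq, best_cost = None, np.inf
    for seq in latent_sequences:
        sim.sync_to_real()                       # hypothetical sim/real state sync
        for z in seq:
            for _ in range(steps_per_skill):
                sim.step(skill_policy(sim.observe(), z))
        cost = cost_fn(sim.observe())
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq
```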

Scaling simulation-to-real transfer by learning composable robot skills

1 code implementation · 26 Sep 2018 · Ryan Julian, Eric Heiden, Zhanpeng He, Hejia Zhang, Stefan Schaal, Joseph J. Lim, Gaurav Sukhatme, Karol Hausman

In particular, we first use simulation to jointly learn a policy for a set of low-level skills, and a "skill embedding" parameterization which can be used to compose them.

Region Growing Curriculum Generation for Reinforcement Learning

no code implementations · 4 Jul 2018 · Artem Molchanov, Karol Hausman, Stan Birchfield, Gaurav Sukhatme

In this work, we introduce a method based on region-growing that allows learning in an environment with any pair of initial and goal states.

reinforcement-learning · Reinforcement Learning (RL)
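The region-growing loop, sketched schematically: sample training start states from a region around a seed state and expand the region whenever the agent becomes reliable inside it. The environment hooks here are hypothetical, and the paper grows regions from both initial and goal states:

```python
from collections import deque

def region_growing_curriculum(env, agent, seed_state, goal_state,
                              radius=0.1, step=0.05, max_radius=1.0,
                              success_threshold=0.8, window=20):
    """Grow the start-state sampling region outward as competence increases."""
    recent = deque(maxlen=window)
    while radius < max_radius:
        start = env.sample_near(seed_state, radius)             # hypothetical sampler
        recent.append(agent.train_episode(start, goal_state))   # returns success bool
        if len(recent) == window and sum(recent) / window >= success_threshold:
            radius += step                                      # expand the region
            recent.clear()
    return agent
```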

Interactive Perception: Leveraging Action in Perception and Perception in Action

no code implementations · 13 Apr 2016 · Jeannette Bohg, Karol Hausman, Bharath Sankaran, Oliver Brock, Danica Kragic, Stefan Schaal, Gaurav Sukhatme

Recent approaches in robotics follow the insight that perception is facilitated by interaction with the environment.

Robotics
