no code implementations • 5 Nov 2024 • Soroush Nasiriany, Sean Kirmani, Tianli Ding, Laura Smith, Yuke Zhu, Danny Driess, Dorsa Sadigh, Ted Xiao
Our method, RT-Affordance, is a hierarchical model that first proposes an affordance plan given the task language, and then conditions the policy on this affordance plan to perform manipulation.
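Since no implementation is released, the following is only a minimal sketch of the two-stage structure described above; the class names, plan fields, and action shapes are illustrative assumptions, not the paper's interface.

```python
# Hypothetical sketch of the hierarchy described above: an affordance-proposal
# module maps task language (plus an image) to an affordance plan, and a
# low-level policy is conditioned on that plan to produce manipulation actions.
from dataclasses import dataclass
import numpy as np


@dataclass
class AffordancePlan:
    """Assumed intermediate representation, e.g. contact points and a grasp pose."""
    keypoints: np.ndarray   # (K, 3) points of interest on the object
    grasp_pose: np.ndarray  # (7,) position + quaternion


class AffordanceProposer:
    """Stage 1: task language + image -> affordance plan (stubbed)."""
    def propose(self, image: np.ndarray, instruction: str) -> AffordancePlan:
        # A real system would query a vision-language model here.
        return AffordancePlan(keypoints=np.zeros((4, 3)), grasp_pose=np.zeros(7))


class AffordanceConditionedPolicy:
    """Stage 2: image + affordance plan -> low-level action (stubbed)."""
    def act(self, image: np.ndarray, plan: AffordancePlan) -> np.ndarray:
        return np.zeros(7)  # e.g. a 7-DoF end-effector command


def hierarchical_step(image, instruction, proposer, policy):
    plan = proposer.propose(image, instruction)  # propose the plan from language
    return policy.act(image, plan)               # condition manipulation on it
```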
no code implementations • 5 Nov 2024 • Laura Smith, Alex Irpan, Montserrat Gonzalez Arenas, Sean Kirmani, Dmitry Kalashnikov, Dhruv Shah, Ted Xiao
The complexity of the real world demands robotic systems that can intelligently adapt to unseen situations.
no code implementations • 15 Aug 2024 • Rafael Rafailov, Kyle Hatch, Anikait Singh, Laura Smith, Aviral Kumar, Ilya Kostrikov, Philippe Hansen-Estruch, Victor Kolev, Philip Ball, Jiajun Wu, Chelsea Finn, Sergey Levine
However, evaluating progress on offline RL algorithms requires effective and challenging benchmarks that capture properties of real-world tasks, provide a range of task difficulties, and cover a range of challenges both in terms of the parameters of the domain (e.g., length of the horizon, sparsity of rewards) and the parameters of the data (e.g., narrow demonstration data or broad exploratory data).
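As a rough illustration of the two axes that sentence enumerates, a benchmark task might be described by a small spec covering domain parameters and data parameters; the dataclass below is a hypothetical sketch, not the benchmark's actual API.

```python
# Illustrative task specification: domain parameters (horizon, reward sparsity)
# and data parameters (narrow demonstrations vs. broad exploratory data).
from dataclasses import dataclass
from typing import Literal


@dataclass(frozen=True)
class OfflineRLTaskSpec:
    name: str
    horizon: int                                        # episode length
    reward: Literal["dense", "sparse"]                  # reward sparsity
    data_source: Literal["narrow_demos", "broad_exploration"]
    num_transitions: int


TASKS = [
    OfflineRLTaskSpec("reach-easy", horizon=100, reward="dense",
                      data_source="narrow_demos", num_transitions=100_000),
    OfflineRLTaskSpec("maze-hard", horizon=1_000, reward="sparse",
                      data_source="broad_exploration", num_transitions=1_000_000),
]
```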
no code implementations • 2 Jul 2024 • Annie S. Chen, Alec M. Lessing, Andy Tang, Govind Chada, Laura Smith, Sergey Levine, Chelsea Finn
Legged robots are physically capable of navigating a diverse range of environments and overcoming a wide range of obstructions.
no code implementations • 7 Jun 2024 • Seungeun Rho, Laura Smith, Tianyu Li, Sergey Levine, Xue Bin Peng, Sehoon Ha
To this end, we introduce Language Guided Skill Discovery (LGSD), a skill discovery framework that aims to directly maximize the semantic diversity between skills.
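A minimal sketch of that idea follows, under the assumption that semantic diversity is measured as the distance between language-embedding representations of visited states; `describe` and `embed` are stand-ins for a captioning model and a sentence encoder, and none of this is the LGSD implementation.

```python
# Toy intrinsic reward: a skill is rewarded for reaching states whose language
# descriptions differ from those reached by other skills.
import numpy as np


def describe(state: np.ndarray) -> str:
    # Placeholder for a model that captions the state in natural language.
    return f"the robot is at x={state[0]:.1f}, y={state[1]:.1f}"


def embed(text: str) -> np.ndarray:
    # Placeholder for a sentence embedding; here a toy hash-seeded vector.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=16)


def semantic_diversity_reward(state, other_skill_states) -> float:
    """Mean embedding distance to states visited by other skills."""
    z = embed(describe(state))
    dists = [np.linalg.norm(z - embed(describe(s))) for s in other_skill_states]
    return float(np.mean(dists)) if dists else 0.0
```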
no code implementations • 2 Nov 2023 • Annie S. Chen, Govind Chada, Laura Smith, Archit Sharma, Zipeng Fu, Sergey Levine, Chelsea Finn
To succeed in the real world, robots must cope with situations that differ from those seen during training.
no code implementations • 26 Oct 2023 • Laura Smith, YunHao Cao, Sergey Levine
Deep reinforcement learning (RL) can enable robots to autonomously acquire complex behaviors, such as legged locomotion.
no code implementations • 19 Apr 2023 • Laura Smith, J. Chase Kew, Tianyu Li, Linda Luu, Xue Bin Peng, Sehoon Ha, Jie Tan, Sergey Levine
Legged robots have enormous potential in their range of capabilities, from navigating unstructured terrains to high-speed running.
1 code implementation • 9 Apr 2023 • Kevin Zakka, Philipp Wu, Laura Smith, Nimrod Gileadi, Taylor Howell, Xue Bin Peng, Sumeet Singh, Yuval Tassa, Pete Florence, Andy Zeng, Pieter Abbeel
Replicating human-like dexterity in robot hands represents one of the largest open problems in robotics.
2 code implementations • 6 Feb 2023 • Philip J. Ball, Laura Smith, Ilya Kostrikov, Sergey Levine
Sample efficiency and exploration remain major challenges in online reinforcement learning (RL).
1 code implementation • 16 Aug 2022 • Laura Smith, Ilya Kostrikov, Sergey Levine
Deep reinforcement learning is a promising approach for learning policies in uncontrolled environments without requiring domain knowledge.
1 code implementation • 4 Nov 2021 • Kimin Lee, Laura Smith, Anca Dragan, Pieter Abbeel
However, it is difficult to quantify the progress in preference-based RL due to the lack of a commonly adopted benchmark.
1 code implementation • 8 Jul 2021 • Vitchyr H. Pong, Ashvin Nair, Laura Smith, Catherine Huang, Sergey Levine
If we can meta-train on offline data, then we can reuse the same static dataset, labeled once with rewards for different tasks, to meta-train policies that adapt to a variety of new tasks at meta-test time.
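The reuse pattern that sentence relies on can be sketched as relabeling a single transition dataset with one reward function per task; the toy dataset, task reward functions, and `relabel` helper below are hypothetical placeholders, not the paper's code.

```python
# One static dataset of (state, action, next_state) transitions, collected once.
import numpy as np

dataset = [(np.array([0.0, 0.0]), np.array([1.0]), np.array([0.1, 0.0])),
           (np.array([0.1, 0.0]), np.array([1.0]), np.array([0.2, 0.1]))]

# Each task is just a reward function over transitions.
tasks = {
    "reach_right": lambda s, a, s2: -abs(1.0 - s2[0]),
    "reach_up":    lambda s, a, s2: -abs(1.0 - s2[1]),
}


def relabel(data, reward_fn):
    """Attach task-specific rewards to the shared transitions."""
    return [(s, a, reward_fn(s, a, s2), s2) for s, a, s2 in data]


# Meta-training would sample a task and adapt on its relabeled copy of the same
# data; here we only build the per-task datasets.
per_task_data = {name: relabel(dataset, fn) for name, fn in tasks.items()}
```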
2 code implementations • 9 Jun 2021 • Kimin Lee, Laura Smith, Pieter Abbeel
We also show that our method is able to utilize real-time human feedback to effectively prevent reward exploitation and learn new behaviors that are difficult to specify with standard reward functions.
no code implementations • 10 Dec 2019 • Laura Smith, Nikita Dhawan, Marvin Zhang, Pieter Abbeel, Sergey Levine
In this paper, we study how these challenges can be alleviated with an automated robotic learning framework, in which multi-stage tasks are defined simply by providing videos of a human demonstrator and then learned autonomously by the robot from raw image observations.
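A very rough, hypothetical sketch of such a pipeline is below: stage goals are taken from the human video, a stub classifier provides a self-generated success signal from pixels, and the robot practices each stage in turn. Every function is an assumed placeholder rather than one of the framework's actual components.

```python
import numpy as np


def split_into_stages(human_video, n_stages=3):
    """Pick one frame per stage of the demonstration video as a stage goal image."""
    idx = np.linspace(0, len(human_video) - 1, n_stages).astype(int)
    return [human_video[i] for i in idx]


def stage_reached(observation, goal_image):
    """Stub for a learned classifier of stage completion from raw pixels."""
    return bool(np.mean(np.abs(observation - goal_image)) < 0.1)


def autonomous_practice(reset_env, step_env, policy, update_policy, stage_goals,
                        n_attempts=100, max_steps=50):
    """Robot practices each stage on its own, using only image observations."""
    for goal in stage_goals:
        for _ in range(n_attempts):
            obs = reset_env()
            for _ in range(max_steps):
                action = policy(obs, goal)
                obs = step_env(action)
                reward = float(stage_reached(obs, goal))  # self-generated reward
                update_policy(obs, action, reward)
                if reward > 0:
                    break
```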
no code implementations • EMNLP 2018 • Masoud Rouhizadeh, Kokil Jaidka, Laura Smith, H. Andrew Schwartz, Anneke Buffone, Lyle Ungar
Individuals express their locus of control, or “control”, in their language when they identify whether or not they are in control of their circumstances.
1 code implementation • ICLR 2019 • Marvin Zhang, Sharad Vikram, Laura Smith, Pieter Abbeel, Matthew J. Johnson, Sergey Levine
Model-based reinforcement learning (RL) has proven to be a data-efficient approach for learning control tasks but is difficult to utilize in domains with complex observations such as images.