Search Results for author: Brian Ichter

Found 13 papers, 2 papers with code

Do As I Can, Not As I Say: Grounding Language in Robotic Affordances

no code implementations • 4 Apr 2022 • Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan

We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for performing complex and temporally-extended instructions, while value functions associated with these skills provide the grounding necessary to connect this knowledge to a particular physical environment.

Decision Making • Language Modelling
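
As a rough illustration of the combination described in the snippet above, the sketch below scores each candidate skill by multiplying the language model's relevance for that skill by its value-function affordance, then executes the highest-scoring one. The helpers `llm_skill_logprob` and `value_fn` are hypothetical placeholders, and the product form is one natural reading of the abstract rather than the paper's exact implementation.

```python
import math

def select_skill(instruction, history, state, skills, llm_skill_logprob, value_fn):
    """Pick the next skill by combining language-model relevance with affordance value.

    llm_skill_logprob(instruction, history, skill) -> log p(skill | instruction, history)
    value_fn(state, skill) -> estimated probability the skill can succeed from `state`
    Both callables are placeholders for this sketch.
    """
    best_skill, best_score = None, float("-inf")
    for skill in skills:
        relevance = math.exp(llm_skill_logprob(instruction, history, skill))  # "say"
        affordance = value_fn(state, skill)                                   # "can"
        score = relevance * affordance
        if score > best_score:
            best_skill, best_score = skill, score
    return best_skill
```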

Chain of Thought Prompting Elicits Reasoning in Large Language Models

no code implementations • 28 Jan 2022 • Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou

We explore how generating a chain of thought -- a series of intermediate reasoning steps -- significantly improves the ability of large language models to perform complex reasoning.

Language Modelling
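
For intuition, the toy prompt below shows the general shape of chain-of-thought prompting: each few-shot exemplar spells out intermediate reasoning steps before giving the final answer, which nudges the model to do the same for the new question. The exemplar is loosely based on the commonly cited arithmetic example, not quoted verbatim from the paper.

```python
# A toy few-shot chain-of-thought prompt: the exemplar walks through the
# intermediate reasoning before stating the answer.
COT_PROMPT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. 5 + 6 = 11. The answer is 11.

Q: {question}
A:"""

def build_cot_prompt(question: str) -> str:
    """Format a new question into the few-shot chain-of-thought template."""
    return COT_PROMPT.format(question=question)

print(build_cot_prompt("A juggler has 16 balls. Half of the balls are golf balls. How many golf balls are there?"))
```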

Learning Language-Conditioned Robot Behavior from Offline Data and Crowd-Sourced Annotation

no code implementations • 2 Sep 2021 • Suraj Nair, Eric Mitchell, Kevin Chen, Brian Ichter, Silvio Savarese, Chelsea Finn

However, goal images also have a number of drawbacks: they are inconvenient for humans to provide, they can over-specify the desired behavior (leading to a sparse reward signal), or they can under-specify task information in the case of non-goal-reaching tasks.

Broadly-Exploring, Local-Policy Trees for Long-Horizon Task Planning

no code implementations • 13 Oct 2020 • Brian Ichter, Pierre Sermanet, Corey Lynch

This task space can be quite general and abstract; its only requirements are that it be sampleable and that it cover the space of useful tasks well.

Motion Planning
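
A heavily simplified sketch of how a planner might exploit such a task space is given below: a tree is grown by repeatedly picking a node, sampling a task, and rolling out a task-conditioned local policy from that node. The helpers `sample_task`, `rollout_local_policy`, and `reached_goal` are hypothetical stand-ins, and uniform node selection is a simplification rather than the paper's exploration strategy.

```python
import random

def broadly_exploring_tree_plan(start_state, sample_task, rollout_local_policy,
                                reached_goal, iterations=1000):
    """Hypothetical sketch of a broadly-exploring tree search over a sampleable task space.

    sample_task()                     -> a task drawn from the (abstract) task space
    rollout_local_policy(state, task) -> state reached after executing the task-conditioned policy
    reached_goal(state)               -> True if the long-horizon goal is satisfied
    """
    tree = [(start_state, [])]  # nodes: (state, task sequence that reached it)
    for _ in range(iterations):
        state, plan = random.choice(tree)   # pick a node to expand (uniform here for simplicity)
        task = sample_task()                # sample a task from the task space
        next_state = rollout_local_policy(state, task)
        tree.append((next_state, plan + [task]))
        if reached_goal(next_state):
            return plan + [task]
    return None
```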

Neural Collision Clearance Estimator for Batched Motion Planning

no code implementations • 14 Oct 2019 • J. Chase Kew, Brian Ichter, Maryam Bandari, Tsang-Wei Edward Lee, Aleksandra Faust

We present a neural network collision checking heuristic, ClearanceNet, and a planning algorithm, CN-RRT.

Motion Planning
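
The sketch below shows one plausible shape for such a heuristic: a small MLP that regresses clearance for a whole batch of configurations at once, with a margin test standing in for an exact collision check. The architecture, the inputs (configuration only, with no workspace encoding), and the margin value are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ClearanceNetSketch(nn.Module):
    """Hypothetical MLP that regresses obstacle clearance for a batch of configurations."""
    def __init__(self, config_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(config_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted clearance (distance to nearest obstacle)
        )

    def forward(self, configs: torch.Tensor) -> torch.Tensor:
        return self.net(configs).squeeze(-1)

def batch_collision_free(model: ClearanceNetSketch, configs: torch.Tensor,
                         margin: float = 0.05) -> torch.Tensor:
    """Treat a configuration as collision-free if its predicted clearance exceeds a safety margin."""
    with torch.no_grad():
        return model(configs) > margin
```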

Learned Critical Probabilistic Roadmaps for Robotic Motion Planning

no code implementations • 8 Oct 2019 • Brian Ichter, Edward Schmerling, Tsang-Wei Edward Lee, Aleksandra Faust

Critical PRMs are demonstrated to achieve up to three orders of magnitude improvement over uniform sampling, while preserving the guarantees and complexity of sampling-based motion planning.

Motion Planning
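
One plausible reading of "critical" sampling is sketched below: a fraction of roadmap nodes is drawn roughly in proportion to a learned criticality score, while the remainder is drawn uniformly so the usual sampling-based coverage guarantees are retained. The function names, the pool-based weighted sampling, and the 10% critical fraction are assumptions for illustration.

```python
import random

def build_critical_prm_nodes(sample_uniform, criticality, n_nodes=1000,
                             critical_fraction=0.1, pool_size=5000):
    """Hypothetical sketch: bias roadmap nodes toward states a learned model deems critical.

    sample_uniform() -> a random collision-free configuration
    criticality(q)   -> learned (non-negative) score of how important q is for connectivity
    """
    n_critical = int(critical_fraction * n_nodes)

    # Draw critical nodes roughly in proportion to their learned criticality score.
    pool = [sample_uniform() for _ in range(pool_size)]
    weights = [criticality(q) for q in pool]
    critical_nodes = random.choices(pool, weights=weights, k=n_critical)

    # Fill the rest of the roadmap with uniform samples to preserve coverage guarantees.
    uniform_nodes = [sample_uniform() for _ in range(n_nodes - n_critical)]

    # Edges would be added by the usual PRM connection step (e.g. k-nearest neighbors),
    # optionally connecting the critical nodes more densely.
    return critical_nodes + uniform_nodes
```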

Zero-shot Imitation Learning from Demonstrations for Legged Robot Visual Navigation

no code implementations • 27 Sep 2019 • Xinlei Pan, Tingnan Zhang, Brian Ichter, Aleksandra Faust, Jie Tan, Sehoon Ha

Here, we propose a zero-shot imitation learning approach for training a visual navigation policy on legged robots from human (third-person perspective) demonstrations, enabling high-quality navigation and cost-effective data collection.

Disentanglement • Imitation Learning • +2

Learning Sampling Distributions for Robot Motion Planning

2 code implementations • 16 Sep 2017 • Brian Ichter, James Harrison, Marco Pavone

This paper proposes a methodology for non-uniform sampling, whereby a sampling distribution is learned from demonstrations, and then used to bias sampling.

Motion Planning
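
A minimal sketch of the biased-sampling step is shown below: the planner draws from the learned, demonstration-trained distribution with some probability and from the uniform distribution otherwise, so completeness is not lost. The helper names and the 0.5 bias value are assumptions; the learned distribution itself (e.g. a conditional generative model conditioned on the planning problem) is trained separately from demonstrations.

```python
import random

def make_biased_sampler(sample_learned, sample_uniform, bias=0.5):
    """Mix a demonstration-trained sampling distribution with uniform sampling.

    sample_learned() -> configuration drawn from the learned distribution
                        (conditioned on the planning problem, e.g. start, goal, obstacles)
    sample_uniform() -> configuration drawn uniformly from free space
    bias             -> probability of drawing from the learned distribution;
                        keeping it below 1 preserves the planner's completeness guarantees
    """
    def sample():
        return sample_learned() if random.random() < bias else sample_uniform()
    return sample

# Usage: pass the returned `sample` function to any sampling-based planner
# (e.g. PRM*, FMT*, RRT*) wherever it would normally draw a uniform random configuration.
```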
