no code implementations • 20 Dec 2023 • Weiwei Gu, Anant Sah, Nakul Gopalan
With both of these results we demonstrate the ability of our model to learn tasks and concepts in a continual learning setting on the robot.
no code implementations • 3 Oct 2023 • Ifrah Idrees, Tian Yun, Naveen Sharma, Yunxin Deng, Nakul Gopalan, George Konidaris, Stefanie Tellex
We propose a novel framework for plan and goal recognition in partially observable domains -- Dialogue for Goal Recognition (D4GR) -- which enables a robot to rectify its belief about human progress by asking clarification questions about noisy sensor data and sub-optimal human actions.
no code implementations • 1 Sep 2023 • Divyanshu Raj, Chitta Baral, Nakul Gopalan
In this work, we present an approach to identify sub-tasks within a demonstrated robot trajectory using language instructions.
no code implementations • 23 Mar 2022 • Max Zuo, Logan Schick, Matthew Gombolay, Nakul Gopalan
In each test, CA-RRT reached more states on average than weighted-RRT within the same number of iterations.
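For context on the comparison above, the sketch below is a plain, generic 2-D RRT (not the paper's CA-RRT or weighted-RRT variants): it grows a tree from a start state toward random samples for a fixed number of iterations, so "states reached per iteration budget" is simply the size of the resulting tree. All names and parameters here are illustrative assumptions.

```python
import math
import random

def rrt_states_reached(start, goal, n_iters=200, step=0.5,
                       bounds=(0.0, 10.0), seed=0):
    """Vanilla 2-D RRT sketch: grow a tree from `start` toward random
    samples and return the list of states reached after `n_iters`
    iterations. This is a generic baseline RRT for illustration only,
    not the CA-RRT or weighted-RRT algorithms compared in the paper."""
    rng = random.Random(seed)
    tree = [start]
    for _ in range(n_iters):
        # Sample a random workspace point (goal-biased 10% of the time).
        sample = goal if rng.random() < 0.1 else (
            rng.uniform(*bounds), rng.uniform(*bounds))
        # Extend the nearest tree node one fixed step toward the sample.
        near = min(tree, key=lambda q: math.dist(q, sample))
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        tree.append(new)
    return tree
```

Under this framing, comparing two planners at an equal iteration budget amounts to comparing the number (and coverage) of tree nodes each produces.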
1 code implementation • Conference On Robot Learning (CoRL) 2021 • Andrew Hundt, Aditya Murali, Priyanka Hubli, Ran Liu, Nakul Gopalan, Matthew Gombolay, Gregory D. Hager
Based upon this insight, we propose See-SPOT-Run (SSR), a new computational approach to robot learning that enables a robot to complete a variety of real robot tasks in novel problem domains without task-specific training.
2 code implementations • Conference On Robot Learning (CoRL) 2021 • Elias Stengel-Eskin, Andrew Hundt, Zhuohong He, Aditya Murali, Nakul Gopalan, Matthew Gombolay, Gregory Hager
Our model completes block manipulation tasks with synthetic commands 530% more often than a UNet-based baseline, and learns to localize actions correctly while creating a mapping of symbols to perceptual input that supports compositional reasoning.
no code implementations • 9 Oct 2021 • Vanya Cohen, Geraud Nangue Tasse, Nakul Gopalan, Steven James, Matthew Gombolay, Benjamin Rosman
We propose a framework that learns to execute natural language instructions in an environment consisting of goal-reaching tasks that share components of their task descriptions.
1 code implementation • 18 Jan 2021 • Pradyumna Tambwekar, Andrew Silva, Nakul Gopalan, Matthew Gombolay
Human-AI policy specification is a novel procedure we define in which humans can collaboratively warm-start a robot's reinforcement learning policy.
1 code implementation • 23 Jun 2020 • Thao Nguyen, Nakul Gopalan, Roma Patel, Matt Corsaro, Ellie Pavlick, Stefanie Tellex
The model takes in a language command containing a verb, for example "Hand me something to cut," and RGB images of candidate objects and selects the object that best satisfies the task specified by the verb.
no code implementations • 30 May 2019 • Vanya Cohen, Benjamin Burchfiel, Thao Nguyen, Nakul Gopalan, Stefanie Tellex, George Konidaris
Our system is able to disambiguate between novel objects, observed via depth images, based on natural language descriptions.
no code implementations • 3 Dec 2018 • Dilip Arumugam, David Abel, Kavosh Asadi, Nakul Gopalan, Christopher Grimm, Jun Ki Lee, Lucas Lehnert, Michael L. Littman
An agent with an inaccurate model of its environment faces a difficult choice: it can ignore the errors in its model and act in the real world in whatever way it determines is optimal with respect to its model.
1 code implementation • WS 2017 • Siddharth Karamcheti, Edward C. Williams, Dilip Arumugam, Mina Rhee, Nakul Gopalan, Lawson L. S. Wong, Stefanie Tellex
Robots operating alongside humans in diverse, stochastic environments must be able to accurately interpret natural language commands.
1 code implementation • 21 Apr 2017 • Dilip Arumugam, Siddharth Karamcheti, Nakul Gopalan, Lawson L. S. Wong, Stefanie Tellex
In this work, by grounding commands to all the tasks or subtasks available in a hierarchical planning framework, we arrive at a model capable of interpreting language at multiple levels of specificity, ranging from coarse to granular.