Search Results for author: Nakul Gopalan

Found 13 papers, 6 papers with code

Interactive Visual Task Learning for Robots

no code implementations • 20 Dec 2023 • Weiwei Gu, Anant Sah, Nakul Gopalan

With both of these results, we demonstrate our model's ability to learn tasks and concepts in a continual learning setting on a robot.

Continual Learning • Novel Concepts • +2

Improved Inference of Human Intent by Combining Plan Recognition and Language Feedback

no code implementations • 3 Oct 2023 • Ifrah Idrees, Tian Yun, Naveen Sharma, Yunxin Deng, Nakul Gopalan, George Konidaris, Stefanie Tellex

We propose a novel framework for plan and goal recognition in partially observable domains -- Dialogue for Goal Recognition (D4GR) -- which enables a robot to rectify its belief about human progress by asking clarification questions about noisy sensor data and sub-optimal human actions.
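How such a framework might operate can be pictured with a minimal Bayesian goal filter that asks a clarification question whenever its belief over goals is too uncertain. This is a generic sketch, not the D4GR algorithm: the goal set, observation model, and entropy threshold below are all invented for illustration.

```python
import numpy as np

# Hypothetical goal set and noisy observation model P(action | goal).
# All numbers are illustrative, not taken from the D4GR paper.
GOALS = ["make_coffee", "set_table", "wash_dishes"]
OBS_MODEL = {
    "grab_mug": np.array([0.70, 0.20, 0.10]),
    "open_tap": np.array([0.05, 0.15, 0.80]),
}

def update_belief(belief, observed_action):
    """Bayesian update of the goal belief given one observed action."""
    posterior = belief * OBS_MODEL[observed_action]
    return posterior / posterior.sum()

def maybe_ask(belief, entropy_threshold=0.9):
    """Ask a clarification question only when the belief is uncertain."""
    entropy = -np.sum(belief * np.log2(belief + 1e-12))
    if entropy > entropy_threshold:
        return f"Are you trying to {GOALS[int(np.argmax(belief))]}?"
    return None

belief = np.ones(len(GOALS)) / len(GOALS)   # uniform prior over goals
belief = update_belief(belief, "grab_mug")  # one noisy sensor reading
print(belief, maybe_ask(belief))            # still uncertain -> asks
```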

Language-Conditioned Change-point Detection to Identify Sub-Tasks in Robotics Domains

no code implementations • 1 Sep 2023 • Divyanshu Raj, Chitta Baral, Nakul Gopalan

In this work, we present an approach to identify sub-tasks within a demonstrated robot trajectory using language instructions.

Change Point Detection • Instruction Following • +3
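To picture the segmentation step in isolation, the sketch below runs a plain sliding-window change-point test over a 1-D trajectory signal. It omits the paper's language conditioning entirely; the synthetic trajectory, window size, and threshold are invented.

```python
import numpy as np

def change_points(signal, window=10, threshold=0.5):
    """Flag indices where the mean shifts between adjacent windows.

    A crude stand-in for sub-task segmentation; the paper's approach
    additionally conditions the segmentation on language instructions.
    """
    raw = []
    for t in range(window, len(signal) - window):
        left = signal[t - window:t].mean()
        right = signal[t:t + window].mean()
        if abs(right - left) > threshold:
            raw.append(t)
    # keep only the first index of each contiguous run of detections
    return [p for i, p in enumerate(raw) if i == 0 or p != raw[i - 1] + 1]

# Synthetic end-effector height trace: reach, lift, then place.
rng = np.random.default_rng(0)
traj = np.concatenate([
    rng.normal(0.0, 0.05, 50),   # sub-task 1
    rng.normal(1.0, 0.05, 50),   # sub-task 2
    rng.normal(0.3, 0.05, 50),   # sub-task 3
])
print(change_points(traj))       # one index near each true boundary
```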

"Good Robot! Now Watch This!": Repurposing Reinforcement Learning for Task-to-Task Transfer

1 code implementation • Conference on Robot Learning (CoRL) 2021 • Andrew Hundt, Aditya Murali, Priyanka Hubli, Ran Liu, Nakul Gopalan, Matthew Gombolay, Gregory D. Hager

Based upon this insight, we propose See-SPOT-Run (SSR), a new computational approach to robot learning that enables a robot to complete a variety of real robot tasks in novel problem domains without task-specific training.

Few-Shot Learning • Meta Reinforcement Learning • +3

Guiding Multi-Step Rearrangement Tasks with Natural Language Instructions

2 code implementations • Conference on Robot Learning (CoRL) 2021 • Elias Stengel-Eskin, Andrew Hundt, Zhuohong He, Aditya Murali, Nakul Gopalan, Matthew Gombolay, Gregory Hager

Our model completes block manipulation tasks with synthetic commands 530% more often than a UNet-based baseline, and learns to localize actions correctly while creating a mapping of symbols to perceptual input that supports compositional reasoning.

Instruction Following

Learning to Follow Language Instructions with Compositional Policies

no code implementations • 9 Oct 2021 • Vanya Cohen, Geraud Nangue Tasse, Nakul Gopalan, Steven James, Matthew Gombolay, Benjamin Rosman

We propose a framework that learns to execute natural language instructions in an environment consisting of goal-reaching tasks that share components of their task descriptions.
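One common way to exploit such shared components, in the general spirit of logical value-function composition rather than necessarily this paper's exact formulation, is to combine goal-reaching Q-functions with max for disjunction and min for conjunction. The Q-tables below are invented toy values.

```python
import numpy as np

# Q-tables for two learned goal-reaching tasks: 4 states x 2 actions.
# Values are invented for illustration.
q_reach_red  = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.7], [0.2, 0.6]])
q_reach_cube = np.array([[0.2, 0.8], [0.1, 0.9], [0.6, 0.3], [0.7, 0.2]])

def q_or(*qs):
    """Disjunction: act well if either sub-goal is acceptable."""
    return np.maximum.reduce(qs)

def q_and(*qs):
    """Conjunction (approximate): act well only where both sub-goals do."""
    return np.minimum.reduce(qs)

# "Go to the red cube" composed as the AND of the two component tasks,
# with no additional training for the conjunction itself.
policy = q_and(q_reach_red, q_reach_cube).argmax(axis=1)
print(policy)   # greedy action per state under the composed values
```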

Natural Language Specification of Reinforcement Learning Policies through Differentiable Decision Trees

1 code implementation • 18 Jan 2021 • Pradyumna Tambwekar, Andrew Silva, Nakul Gopalan, Matthew Gombolay

Human-AI policy specification is a novel procedure we define in which humans can collaboratively warm-start a robot's reinforcement learning policy.

BIG-bench Machine Learning • reinforcement-learning • +1
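The core mechanism of a differentiable decision tree is to replace each hard threshold split with a sigmoid-gated soft split, so the whole tree can be trained by gradient descent while still reading as if-then rules. Below is a bare-bones depth-1 sketch in plain NumPy; the weights, leaf values, and the CartPole-style reading of the inputs are invented, and the paper's warm-starting from natural language is not shown.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SoftDecisionNode:
    """One differentiable split that softly routes an input between two
    leaf action-logit vectors. Parameters here are illustrative."""

    def __init__(self, w, b, leaf_left, leaf_right):
        self.w, self.b = np.asarray(w, dtype=float), float(b)
        self.leaf_left = np.asarray(leaf_left, dtype=float)
        self.leaf_right = np.asarray(leaf_right, dtype=float)

    def forward(self, x):
        p_left = sigmoid(self.w @ x + self.b)   # soft routing probability
        logits = p_left * self.leaf_left + (1.0 - p_left) * self.leaf_right
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()                  # distribution over actions

# "If the pole leans right, push right" as one interpretable soft rule.
node = SoftDecisionNode(w=[0.0, 0.0, 5.0, 1.0], b=0.0,
                        leaf_left=[0.0, 2.0], leaf_right=[2.0, 0.0])
print(node.forward(np.array([0.0, 0.0, 0.2, 0.1])))   # favors "push right"
```

Because every operation above is differentiable, the split weights and leaf logits can be warm-started from human input and then fine-tuned with standard gradient-based reinforcement learning.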

Robot Object Retrieval with Contextual Natural Language Queries

1 code implementation • 23 Jun 2020 • Thao Nguyen, Nakul Gopalan, Roma Patel, Matt Corsaro, Ellie Pavlick, Stefanie Tellex

The model takes in a language command containing a verb, for example "Hand me something to cut," and RGB images of candidate objects and selects the object that best satisfies the task specified by the verb.

Natural Language Queries • Object • +1
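The selection step can be pictured as scoring each candidate object's image against an embedding of the command and returning the argmax. In the sketch below, `embed_command` and `embed_image` are hypothetical stand-ins (hash-seeded random projections) for the paper's learned language and vision encoders.

```python
import numpy as np

def embed_command(command: str) -> np.ndarray:
    """Hypothetical stand-in for a learned language encoder."""
    rng = np.random.default_rng(abs(hash(command)) % 2**32)
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

def embed_image(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a learned RGB image encoder."""
    rng = np.random.default_rng(int(image.sum()) % 2**32)
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

def select_object(command, candidate_images):
    """Return the index of the candidate whose image embedding has the
    highest cosine similarity with the command embedding."""
    c = embed_command(command)
    scores = [float(c @ embed_image(img)) for img in candidate_images]
    return int(np.argmax(scores))

candidates = [np.full((32, 32, 3), i, dtype=float) for i in range(3)]
print(select_object("Hand me something to cut", candidates))
```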

Grounding Language Attributes to Objects using Bayesian Eigenobjects

no code implementations • 30 May 2019 • Vanya Cohen, Benjamin Burchfiel, Thao Nguyen, Nakul Gopalan, Stefanie Tellex, George Konidaris

Our system is able to disambiguate between novel objects, observed via depth images, based on natural language descriptions.

3D Shape Representation • Object
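Eigenobject-style representations express object shapes in a low-dimensional linear basis, so a novel object can be summarized by a handful of coefficients. The sketch below builds such a basis with plain PCA over invented voxel data; the paper's Bayesian treatment, partial-view completion, and language grounding are not reproduced.

```python
import numpy as np

# Invented "training set": 100 objects as flattened 8x8x8 voxel grids.
rng = np.random.default_rng(1)
shapes = rng.random((100, 512))

# Eigenbasis: the mean shape plus the top-k principal directions.
mean = shapes.mean(axis=0)
_, _, vt = np.linalg.svd(shapes - mean, full_matrices=False)
basis = vt[:10]                       # top 10 "eigen-shapes"

def project(shape):
    """Low-dimensional coefficients describing one observed object."""
    return basis @ (shape - mean)

def reconstruct(coeffs):
    """Approximate the full shape back from its coefficients."""
    return mean + basis.T @ coeffs

novel = rng.random(512)
coeffs = project(novel)
print(coeffs.shape, np.linalg.norm(novel - reconstruct(coeffs)))
```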

Mitigating Planner Overfitting in Model-Based Reinforcement Learning

no code implementations • 3 Dec 2018 • Dilip Arumugam, David Abel, Kavosh Asadi, Nakul Gopalan, Christopher Grimm, Jun Ki Lee, Lucas Lehnert, Michael L. Littman

An agent with an inaccurate model of its environment faces a difficult choice: it can ignore the errors in its model and act in the real world in whatever way it determines is optimal with respect to its model.

Model-based Reinforcement Learning • Position • +2

Accurately and Efficiently Interpreting Human-Robot Instructions of Varying Granularities

1 code implementation • 21 Apr 2017 • Dilip Arumugam, Siddharth Karamcheti, Nakul Gopalan, Lawson L. S. Wong, Stefanie Tellex

In this work, by grounding commands to all the tasks or subtasks available in a hierarchical planning framework, we arrive at a model capable of interpreting language at multiple levels of specificity, ranging from coarse to more granular.

Specificity
