Search Results for author: Rohan Paul

Found 7 papers, 4 papers with code

Learning Neuro-symbolic Programs for Language Guided Robot Manipulation

no code implementations • 12 Nov 2022 • Namasivayam Kalithasan, Himanshu Singh, Vishal Bindal, Arnav Tuli, Vishwajeet Agrawal, Rahul Jain, Parag Singla, Rohan Paul

Given a natural language instruction and an input scene, our goal is to train a model to output a manipulation program that can be executed by the robot.

Robot Manipulation

GoalNet: Inferring Conjunctive Goal Predicates from Human Plan Demonstrations for Robot Instruction Following

1 code implementation • 14 May 2022 • Shreya Sharma, Jigyasa Gupta, Shreshth Tuli, Rohan Paul, Mausam

Our goal is to enable a robot to learn how to sequence its actions to perform tasks specified as natural language instructions, given successful demonstrations from a human partner.

Decision Making • Instruction Following

TANGO: Commonsense Generalization in Predicting Tool Interactions for Mobile Manipulators

1 code implementation • 5 May 2021 • Shreshth Tuli, Rajas Bansal, Rohan Paul, Mausam

We introduce a novel neural model, termed TANGO, for predicting task-specific tool interactions, trained using demonstrations from human teachers instructing a virtual robot.

Multi-facet Universal Schema

no code implementations • EACL 2021 • Rohan Paul, Haw-Shiuan Chang, Andrew McCallum

To address the violation of the USchema assumption, we propose multi-facet universal schema, which uses a neural model to represent each sentence pattern as multiple facet embeddings and encourages one of these facet embeddings to be close to that of another sentence pattern when the two patterns co-occur with the same entity pair.

Relation Extraction

ToolNet: Using Commonsense Generalization for Predicting Tool Use for Robot Plan Synthesis

1 code implementation • 9 Jun 2020 • Rajas Bansal, Shreshth Tuli, Rohan Paul, Mausam

Compared to a graph neural network baseline, it achieves a 14-27% accuracy improvement in predicting known tools from new world scenes, and a 44-67% improvement in generalization to novel objects not encountered during training.

Leveraging Past References for Robust Language Grounding

no code implementations • CoNLL 2019 • Subhro Roy, Michael Noseworthy, Rohan Paul, Daehyung Park, Nicholas Roy

We therefore reframe the grounding problem from the perspective of coreference detection and propose a neural network that detects when two expressions refer to the same object.

Referring Expression • Visual Grounding
