Search Results for author: Rohan Chitnis

Found 17 papers, 6 papers with code

Towards Optimal Correlational Object Search

no code implementations · 19 Oct 2021 · Kaiyu Zheng, Rohan Chitnis, Yoonchang Sung, George Konidaris, Stefanie Tellex

In this paper, we propose the Correlational Object Search POMDP (COS-POMDP), which can be solved to produce search strategies that use correlational information.
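The correlational idea can be sketched as a Bayes update over target locations after detecting a correlated object. The rooms, prior, and likelihood model below are illustrative stand-ins, not the paper's POMDP formulation:

```python
def update_target_belief(belief, corr_likelihood, detected_cell):
    """Bayes update: P(target=s | detection) ∝ P(detection | target=s) * P(target=s).

    belief: dict mapping location -> prior probability of the target being there.
    corr_likelihood: (target_loc, detected_loc) -> likelihood of detecting the
        correlated object at detected_loc given the target's location.
    """
    posterior = {s: corr_likelihood(s, detected_cell) * p for s, p in belief.items()}
    z = sum(posterior.values())
    return {s: p / z for s, p in posterior.items()}

# Example: a detected fridge is likely co-located with the cup we are searching for.
prior = {"kitchen": 0.25, "living_room": 0.25, "bedroom": 0.25, "office": 0.25}
likelihood = lambda target, det: 0.9 if target == det else 0.1
posterior = update_target_belief(prior, likelihood, "kitchen")
```

One observation of a correlated object sharpens the belief from uniform (0.25 everywhere) to 0.75 on the kitchen.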

Reinforcement Learning for Classical Planning: Viewing Heuristics as Dense Reward Generators

no code implementations · 30 Sep 2021 · Clement Gehring, Masataro Asai, Rohan Chitnis, Tom Silver, Leslie Pack Kaelbling, Shirin Sohrabi, Michael Katz

In this paper, we propose to leverage domain-independent heuristic functions commonly used in the classical planning literature to improve the sample efficiency of RL.
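One standard way to turn a planning heuristic into dense reward is potential-based shaping with the (negated) heuristic as the potential; this is a minimal sketch of that reading, with a toy distance-to-goal heuristic rather than one from the classical planning literature:

```python
def shaped_reward(reward, s, s_next, heuristic, gamma=0.99):
    """r' = r + gamma * Phi(s') - Phi(s), with Phi(s) = -heuristic(s).

    Potential-based shaping preserves optimal policies (Ng et al., 1999)
    while giving the agent dense feedback on every step.
    """
    phi = lambda state: -heuristic(state)
    return reward + gamma * phi(s_next) - phi(s)

# Toy domain: states are distances to the goal; the heuristic is that distance.
h = lambda s: s
r = shaped_reward(0.0, s=5, s_next=4, heuristic=h)  # moving closer earns positive reward
```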

Learning Neuro-Symbolic Relational Transition Models for Bilevel Planning

no code implementations · 28 May 2021 · Rohan Chitnis, Tom Silver, Joshua B. Tenenbaum, Tomas Lozano-Perez, Leslie Pack Kaelbling

In robotic domains, learning and planning are complicated by continuous state spaces, continuous action spaces, and long task horizons.

Model-based Reinforcement Learning

Learning Symbolic Operators for Task and Motion Planning

1 code implementation · 28 Feb 2021 · Tom Silver, Rohan Chitnis, Joshua Tenenbaum, Leslie Pack Kaelbling, Tomas Lozano-Perez

We then propose a bottom-up relational learning method for operator learning and show how the learned operators can be used for planning in a TAMP system.

Motion Planning · Operator learning · +1
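The target of such operator learning is typically a STRIPS-style operator with preconditions, add effects, and delete effects; the representation below is a generic minimal version, with illustrative literals, not the paper's learned output:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    preconditions: frozenset   # literals that must hold to apply the operator
    add_effects: frozenset     # literals made true by applying it
    delete_effects: frozenset  # literals made false by applying it

    def applicable(self, state):
        return self.preconditions <= state

    def apply(self, state):
        assert self.applicable(state)
        return (state - self.delete_effects) | self.add_effects

pick = Operator(
    name="pick(block)",
    preconditions=frozenset({"handempty", "ontable(block)"}),
    add_effects=frozenset({"holding(block)"}),
    delete_effects=frozenset({"handempty", "ontable(block)"}),
)
state = frozenset({"handempty", "ontable(block)"})
next_state = pick.apply(state)
```

A bottom-up learner fits the precondition and effect sets from observed (state, action, next state) transitions; a TAMP system then chains such operators symbolically before refining them with motion planning.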

Integrated Task and Motion Planning

no code implementations · 2 Oct 2020 · Caelan Reed Garrett, Rohan Chitnis, Rachel Holladay, Beomjoon Kim, Tom Silver, Leslie Pack Kaelbling, Tomás Lozano-Pérez

The problem of planning for a robot that operates in environments containing a large number of objects, taking actions to move itself through the world as well as to change the state of the objects, is known as task and motion planning (TAMP).

Motion Planning

Planning with Learned Object Importance in Large Problem Instances using Graph Neural Networks

1 code implementation · 11 Sep 2020 · Tom Silver, Rohan Chitnis, Aidan Curtis, Joshua Tenenbaum, Tomas Lozano-Perez, Leslie Pack Kaelbling

We conclude that learning to predict a sufficient set of objects for a planning problem is a simple, powerful, and general mechanism for planning in large instances.

Motion Planning
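The "predict a sufficient object set" mechanism amounts to scoring each object and planning only over those kept; in the sketch below the scorer is a hard-coded placeholder standing in for the learned graph neural network, and all object names are illustrative:

```python
def prune_objects(objects, goal_objects, score, threshold=0.5):
    """Keep every object mentioned in the goal, plus any other object the
    scorer deems important; hand the pruned instance to an off-the-shelf planner."""
    return {o for o in objects if o in goal_objects or score(o) >= threshold}

objects = {"cup", "table", "chair", "lamp", "robot"}
goal_objects = {"cup"}
score = lambda o: {"table": 0.9, "robot": 0.8}.get(o, 0.1)  # stand-in for a GNN
kept = prune_objects(objects, goal_objects, score)
```

Planning time in many domains scales badly with object count, so shrinking the instance before planning can dominate any overhead from the prediction step.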

CAMPs: Learning Context-Specific Abstractions for Efficient Planning in Factored MDPs

1 code implementation · 26 Jul 2020 · Rohan Chitnis, Tom Silver, Beomjoon Kim, Leslie Pack Kaelbling, Tomas Lozano-Perez

A general meta-planning strategy is to learn to impose constraints on the states considered and actions taken by the agent.

Motion Planning
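Imposing a learned, context-specific constraint can be pictured as masking out actions (and with them, state variables) that are irrelevant in the current context; the context and constraint pairing below is purely illustrative:

```python
def constrained_actions(actions, context, constraint):
    """Filter the action set to those consistent with the constraint chosen
    for this context; planning then searches the smaller space."""
    allowed = constraint(context)
    return [a for a in actions if allowed(a)]

actions = ["move(room1)", "move(room2)", "open(door1)", "open(door2)"]
# In a context where the goal only involves room1, prune everything touching room2/door2.
constraint = lambda ctx: (lambda a: "2" not in a) if ctx == "goal_in_room1" else (lambda a: True)
pruned = constrained_actions(actions, "goal_in_room1", constraint)
```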

PDDLGym: Gym Environments from PDDL Problems

2 code implementations · 15 Feb 2020 · Tom Silver, Rohan Chitnis

We present PDDLGym, a framework that automatically constructs OpenAI Gym environments from PDDL domains and problems.

Decision Making · OpenAI Gym
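The shape of what PDDLGym automates is a gym-style reset/step interface over symbolic transitions. The toy class below illustrates that shape only; it is not the pddlgym API, and the domain is a one-operator stand-in:

```python
class SymbolicEnv:
    """Minimal gym-like wrapper over STRIPS-style transitions (illustrative)."""

    def __init__(self, initial_state, goal, operators):
        self.initial_state = frozenset(initial_state)
        self.goal = frozenset(goal)
        self.operators = operators  # action name -> (preconditions, add, delete)

    def reset(self):
        self.state = self.initial_state
        return self.state

    def step(self, action):
        pre, add, delete = self.operators[action]
        if pre <= self.state:  # applicable: apply add/delete effects
            self.state = (self.state - delete) | add
        done = self.goal <= self.state
        return self.state, (1.0 if done else 0.0), done

env = SymbolicEnv(
    initial_state={"at(a)"},
    goal={"at(b)"},
    operators={"move(a,b)": (frozenset({"at(a)"}), frozenset({"at(b)"}), frozenset({"at(a)"}))},
)
env.reset()
state, reward, done = env.step("move(a,b)")
```

PDDLGym builds this kind of environment automatically from PDDL domain and problem files, so RL and exploration methods can run on relational benchmarks unchanged.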

Intrinsic Motivation for Encouraging Synergistic Behavior

no code implementations · ICLR 2020 · Rohan Chitnis, Shubham Tulsiani, Saurabh Gupta, Abhinav Gupta

Our key idea is that a good guiding principle for intrinsic motivation in synergistic tasks is to take actions which affect the world in ways that would not be achieved if the agents were acting on their own.
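One simple way to operationalize that principle: reward the agents for joint effects not explained by either agent acting alone. The set-difference formulation and the lifting example below are illustrative, not the paper's learned-dynamics version:

```python
def synergy_reward(joint_effect, solo_effect_a, solo_effect_b):
    """Intrinsic reward = number of predicate-level changes in the joint
    effect that neither agent's individual effect accounts for."""
    composed = solo_effect_a | solo_effect_b
    return len(joint_effect - composed)

# Lifting a heavy bar: neither arm alone moves it, but together they do.
r = synergy_reward(joint_effect={"lifted(bar)"}, solo_effect_a=set(), solo_effect_b=set())
```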

GLIB: Efficient Exploration for Relational Model-Based Reinforcement Learning via Goal-Literal Babbling

1 code implementation · 22 Jan 2020 · Rohan Chitnis, Tom Silver, Joshua Tenenbaum, Leslie Pack Kaelbling, Tomas Lozano-Perez

We address the problem of efficient exploration for transition model learning in the relational model-based reinforcement learning setting without extrinsic goals or rewards.

Decision Making · Efficient Exploration · +1
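The goal-literal babbling loop can be sketched as: sample a goal, try to plan to it with the current model, fall back to exploratory action when planning fails, and learn from whatever transition results. All callables below are placeholders, not the paper's implementation:

```python
def glib_step(model, sample_goal, plan, execute_and_observe, fallback_action):
    """One iteration of goal babbling; `model` here is just a transition buffer."""
    goal = sample_goal()
    actions = plan(model, goal)          # may fail while the model is still poor
    action = actions[0] if actions else fallback_action()
    model.append(execute_and_observe(action))
    return goal, action

model = []
goal, action = glib_step(
    model,
    sample_goal=lambda: "on(a,b)",
    plan=lambda m, g: [],                # empty model: planning fails, so we explore
    execute_and_observe=lambda a: ("s0", a, "s1"),
    fallback_action=lambda: "explore",
)
```

The point of babbling goals rather than actions is that even failed plans drive the agent toward transitions its current model predicts incorrectly.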

Learning Compact Models for Planning with Exogenous Processes

no code implementations · 30 Sep 2019 · Rohan Chitnis, Tomás Lozano-Pérez

We address the problem of approximate model minimization for MDPs in which the state is partitioned into endogenous and (much larger) exogenous components.

Efficient Bimanual Manipulation Using Learned Task Schemas

no code implementations · 30 Sep 2019 · Rohan Chitnis, Shubham Tulsiani, Saurabh Gupta, Abhinav Gupta

Our insight is that for many tasks, the learning process can be decomposed into learning a state-independent task schema (a sequence of skills to execute) and a policy to choose the parameterizations of the skills in a state-dependent manner.
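The schema/parameterization split can be sketched as a fixed, state-independent skill sequence whose per-skill parameters are chosen by a state-dependent policy. Skill names and the toy dynamics are illustrative:

```python
schema = ["reach", "grasp", "lift"]  # state-independent: learned once, reused everywhere

def execute_schema(schema, state, param_policy, step):
    """Run the fixed skill sequence, choosing parameters per-state at each step."""
    for skill in schema:
        params = param_policy(skill, state)  # the state-dependent part
        state = step(state, skill, params)
    return state

final = execute_schema(
    schema,
    state={"height": 0},
    param_policy=lambda skill, s: {"lift": {"dz": 5}}.get(skill, {}),
    step=lambda s, skill, p: {**s, "height": s["height"] + p.get("dz", 0)},
)
```

Factoring the problem this way shrinks what must be learned per state: the discrete sequencing is amortized across tasks, leaving only continuous parameter choice to the policy.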

Learning Quickly to Plan Quickly Using Modular Meta-Learning

1 code implementation · 20 Sep 2018 · Rohan Chitnis, Leslie Pack Kaelbling, Tomás Lozano-Pérez

Multi-object manipulation problems in continuous state and action spaces can be solved by planners that search over sampled values for the continuous parameters of operators.

Meta-Learning

Learning What Information to Give in Partially Observed Domains

no code implementations · 21 May 2018 · Rohan Chitnis, Leslie Pack Kaelbling, Tomás Lozano-Pérez

We consider such a setting in which the agent can, while acting, transmit declarative information to the human that helps them understand aspects of this unseen environment.

Finding Frequent Entities in Continuous Data

no code implementations · 8 May 2018 · Ferran Alet, Rohan Chitnis, Leslie P. Kaelbling, Tomas Lozano-Perez

In many applications that involve processing high-dimensional data, it is important to identify a small set of entities that account for a significant fraction of detections.
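A naive baseline for this problem, which methods like this one improve on, is to bucket continuous detections into coarse grid cells and report cells holding a significant fraction of the data; the points and parameters below are illustrative:

```python
from collections import Counter

def frequent_cells(points, cell_size, min_fraction):
    """Grid-based heavy hitters: hash each point to a cell, keep cells that
    account for at least min_fraction of all detections."""
    buckets = Counter(tuple(int(x // cell_size) for x in p) for p in points)
    n = len(points)
    return {cell for cell, count in buckets.items() if count / n >= min_fraction}

points = [(0.1, 0.2), (0.15, 0.25), (0.12, 0.18), (5.0, 5.0)]
hot = frequent_cells(points, cell_size=1.0, min_fraction=0.5)
```

The weakness of the grid baseline is its sensitivity to cell size and to entities straddling cell boundaries, which is what motivates more careful formulations.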

Integrating Human-Provided Information Into Belief State Representation Using Dynamic Factorization

no code implementations · 28 Feb 2018 · Rohan Chitnis, Leslie Pack Kaelbling, Tomás Lozano-Pérez

In partially observed environments, it can be useful for a human to provide the robot with declarative information that represents probabilistic relational constraints on properties of objects in the world, augmenting the robot's sensory observations.
