Search Results for author: Rohan Chitnis

Found 23 papers, 11 papers with code

PDDLGym: Gym Environments from PDDL Problems

1 code implementation • 15 Feb 2020 • Tom Silver, Rohan Chitnis

We present PDDLGym, a framework that automatically constructs OpenAI Gym environments from PDDL domains and problems.

Decision Making • OpenAI Gym • +2
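
A minimal usage sketch, following the published PDDLGym README (the environment name is one of the bundled examples; API details may vary across versions):

    import pddlgym

    # Build a Gym environment directly from bundled PDDL domain/problem files.
    env = pddlgym.make("PDDLEnvSokoban-v0")
    obs, debug_info = env.reset()            # obs is a set of ground literals
    action = env.action_space.sample(obs)    # valid actions depend on the state
    obs, reward, done, debug_info = env.step(action)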

Predicate Invention for Bilevel Planning

1 code implementation • 17 Mar 2022 • Tom Silver, Rohan Chitnis, Nishanth Kumar, Willie McClinton, Tomas Lozano-Perez, Leslie Pack Kaelbling, Joshua Tenenbaum

Our key idea is to learn predicates by optimizing a surrogate objective that is tractable but faithful to our real efficient-planning objective.
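
A rough sketch of that surrogate idea: score a candidate predicate set by how well its induced abstraction explains demonstrations, minus a size penalty as a tractable stand-in for planning time (all names and the weight below are hypothetical, not the paper's actual objective):

    def surrogate_score(predicate_set, demos, explains, num_groundings):
        # Reward abstractions that still explain the demonstrations; penalize
        # large abstractions, which tend to slow down high-level search.
        fit = sum(explains(predicate_set, demo) for demo in demos)
        return fit - 0.1 * num_groundings(predicate_set)  # illustrative weight

    def invent_predicates(candidate_sets, demos, explains, num_groundings):
        # Pick the best-scoring candidate set under the surrogate objective.
        return max(candidate_sets,
                   key=lambda p: surrogate_score(p, demos, explains, num_groundings))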

Learning Quickly to Plan Quickly Using Modular Meta-Learning

1 code implementation • 20 Sep 2018 • Rohan Chitnis, Leslie Pack Kaelbling, Tomás Lozano-Pérez

Multi-object manipulation problems in continuous state and action spaces can be solved by planners that search over sampled values for the continuous parameters of operators.

Meta-Learning
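
The setup the paper builds on can be pictured as a depth-limited search that samples values for each operator's continuous parameters; a hedged sketch with hypothetical operator methods:

    def plan(state, operators, goal, depth=3, num_samples=5):
        # Depth-limited search over operators with sampled continuous parameters.
        if goal(state):
            return []
        if depth == 0:
            return None
        for op in operators:
            for _ in range(num_samples):
                theta = op.sample_parameters(state)     # continuous parameters
                if op.applicable(state, theta):
                    rest = plan(op.apply(state, theta), operators, goal, depth - 1)
                    if rest is not None:
                        return [(op, theta)] + rest
        return None

The meta-learning contribution then concerns learning such samplers across tasks so that new tasks can be planned quickly.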

Learning Symbolic Operators for Task and Motion Planning

1 code implementation • 28 Feb 2021 • Tom Silver, Rohan Chitnis, Joshua Tenenbaum, Leslie Pack Kaelbling, Tomas Lozano-Perez

We then propose a bottom-up relational learning method for operator learning and show how the learned operators can be used for planning in a TAMP system.

Motion Planning • Operator learning • +2
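
One common bottom-up recipe consistent with this description: group observed transitions by their lifted effects, then intersect the lifted literals that held beforehand to get candidate preconditions (a hedged sketch; helper names are assumptions):

    from collections import defaultdict

    def learn_operators(transitions, lift_effects, lift_state):
        # Group transitions by lifted effects (assumed hashable, e.g. frozensets).
        groups = defaultdict(list)
        for state, action, next_state in transitions:
            effects = lift_effects(state, next_state, action)
            groups[effects].append(lift_state(state, action))
        # For each effect group, intersect the lifted preimages as a precondition.
        return [(set.intersection(*pres), effects)
                for effects, pres in groups.items()]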

Planning with Learned Object Importance in Large Problem Instances using Graph Neural Networks

1 code implementation • 11 Sep 2020 • Tom Silver, Rohan Chitnis, Aidan Curtis, Joshua Tenenbaum, Tomas Lozano-Perez, Leslie Pack Kaelbling

We conclude that learning to predict a sufficient set of objects for a planning problem is a simple, powerful, and general mechanism for planning in large instances.

Motion Planning • Task and Motion Planning
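
The mechanism can be sketched as score-then-plan with fallback: keep only the objects the learned model scores highly, attempt to plan, and widen the set if planning fails (names are illustrative):

    def plan_with_important_objects(problem, score_object, plan,
                                    thresholds=(0.9, 0.5, 0.1, 0.0)):
        for t in thresholds:                  # progressively include more objects
            objects = {o for o in problem.objects
                       if score_object(o, problem) >= t}
            result = plan(problem.restrict_to(objects))
            if result is not None:
                return result                 # plan found in the reduced problem
        return None                           # even the full object set failed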

Towards Optimal Correlational Object Search

1 code implementation • 19 Oct 2021 • Kaiyu Zheng, Rohan Chitnis, Yoonchang Sung, George Konidaris, Stefanie Tellex

In realistic applications of object search, robots will need to locate target objects in complex environments while coping with unreliable sensors, especially for small or hard-to-detect objects.

Object

When should we prefer Decision Transformers for Offline Reinforcement Learning?

1 code implementation • 23 May 2023 • Prajjwal Bhargava, Rohan Chitnis, Alborz Geramifard, Shagun Sodhani, Amy Zhang

Three popular algorithms for offline RL are Conservative Q-Learning (CQL), Behavior Cloning (BC), and Decision Transformer (DT), from the classes of Q-Learning, Imitation Learning, and Sequence Modeling, respectively.

D4RL • Imitation Learning • +5

Learning Efficient Abstract Planning Models that Choose What to Predict

1 code implementation • 16 Aug 2022 • Nishanth Kumar, Willie McClinton, Rohan Chitnis, Tom Silver, Tomás Lozano-Pérez, Leslie Pack Kaelbling

An effective approach to solving long-horizon tasks in robotics domains with continuous state and action spaces is bilevel planning, wherein a high-level search over an abstraction of an environment is used to guide low-level decision-making.

Decision Making • Operator learning
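
A generic skeleton of the bilevel loop described here: search over abstract plan skeletons at the high level, and try to refine each into continuous actions at the low level (illustrative names, not the paper's exact interface):

    def bilevel_plan(abstract_state, abstract_search, refine, max_skeletons=10):
        # High level: enumerate candidate abstract plans ("skeletons").
        for skeleton in abstract_search(abstract_state, max_skeletons):
            # Low level: sample continuous parameters to realize the skeleton.
            low_level_plan = refine(skeleton)
            if low_level_plan is not None:
                return low_level_plan
        return None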

GLIB: Efficient Exploration for Relational Model-Based Reinforcement Learning via Goal-Literal Babbling

1 code implementation • 22 Jan 2020 • Rohan Chitnis, Tom Silver, Joshua Tenenbaum, Leslie Pack Kaelbling, Tomas Lozano-Perez

We address the problem of efficient exploration for transition model learning in the relational model-based reinforcement learning setting without extrinsic goals or rewards.

Decision Making • Efficient Exploration • +3
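
The babbling loop can be sketched as: propose a novel conjunction of goal literals, plan toward it with the current learned model, and update the model from whatever transitions actually occur (hedged; the env interface is simplified and helper names are assumptions):

    def glib_explore(env, model, sample_goal_literals, plan, num_episodes=100):
        for _ in range(num_episodes):
            state = env.reset()
            goal = sample_goal_literals(model, state)    # "babbled" goal
            actions = plan(model, state, goal) or [env.sample_action(state)]
            for action in actions:
                next_state = env.step(action)
                model.update(state, action, next_state)  # learn the transition model
                state = next_state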

Learning What Information to Give in Partially Observed Domains

no code implementations • 21 May 2018 • Rohan Chitnis, Leslie Pack Kaelbling, Tomás Lozano-Pérez

We consider such a setting in which the agent can, while acting, transmit declarative information to the human that helps them understand aspects of this unseen environment.

Integrating Human-Provided Information Into Belief State Representation Using Dynamic Factorization

no code implementations • 28 Feb 2018 • Rohan Chitnis, Leslie Pack Kaelbling, Tomás Lozano-Pérez

In partially observed environments, it can be useful for a human to provide the robot with declarative information that represents probabilistic relational constraints on properties of objects in the world, augmenting the robot's sensory observations.

Finding Frequent Entities in Continuous Data

no code implementations • 8 May 2018 • Ferran Alet, Rohan Chitnis, Leslie P. Kaelbling, Tomas Lozano-Perez

In many applications that involve processing high-dimensional data, it is important to identify a small set of entities that account for a significant fraction of detections.

Clustering

Efficient Bimanual Manipulation Using Learned Task Schemas

no code implementations • 30 Sep 2019 • Rohan Chitnis, Shubham Tulsiani, Saurabh Gupta, Abhinav Gupta

Our insight is that for many tasks, the learning process can be decomposed into learning a state-independent task schema (a sequence of skills to execute) and a policy to choose the parameterizations of the skills in a state-dependent manner.
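
That decomposition can be read as a two-part controller: a fixed skill sequence (the schema) plus a state-dependent policy over each skill's continuous arguments (a sketch; names are assumptions):

    def execute_task(state, schema, parameter_policy, env):
        for skill in schema:                          # state-independent sequence
            params = parameter_policy(state, skill)   # state-dependent arguments
            state = env.apply_skill(state, skill, params)
        return state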

Learning Compact Models for Planning with Exogenous Processes

no code implementations • 30 Sep 2019 • Rohan Chitnis, Tomás Lozano-Pérez

We address the problem of approximate model minimization for MDPs in which the state is partitioned into endogenous and (much larger) exogenous components.

Intrinsic Motivation for Encouraging Synergistic Behavior

no code implementations • ICLR 2020 • Rohan Chitnis, Shubham Tulsiani, Saurabh Gupta, Abhinav Gupta

Our key idea is that a good guiding principle for intrinsic motivation in synergistic tasks is to take actions which affect the world in ways that would not be achieved if the agents were acting on their own.
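
One concrete way to operationalize that principle (a hypothetical formulation, not necessarily the paper's exact reward): reward the discrepancy between the true joint outcome and what composed single-agent predictions would expect:

    import numpy as np

    def synergy_reward(state, action_a, action_b, env_step, predict_single):
        joint_next = env_step(state, action_a, action_b)  # true joint outcome
        # What a composition of single-agent effect models would predict.
        composed = predict_single(predict_single(state, action_a), action_b)
        # Intrinsic reward: effects the compositional prediction cannot explain.
        return float(np.linalg.norm(joint_next - composed))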

Integrated Task and Motion Planning

no code implementations • 2 Oct 2020 • Caelan Reed Garrett, Rohan Chitnis, Rachel Holladay, Beomjoon Kim, Tom Silver, Leslie Pack Kaelbling, Tomás Lozano-Pérez

The problem of planning for a robot that operates in environments containing a large number of objects, taking actions to move itself through the world as well as to change the state of the objects, is known as task and motion planning (TAMP).

Motion Planning • Task and Motion Planning

Reinforcement Learning for Classical Planning: Viewing Heuristics as Dense Reward Generators

no code implementations • 30 Sep 2021 • Clement Gehring, Masataro Asai, Rohan Chitnis, Tom Silver, Leslie Pack Kaelbling, Shirin Sohrabi, Michael Katz

In this paper, we propose to leverage domain-independent heuristic functions commonly used in the classical planning literature to improve the sample efficiency of RL.

reinforcement-learning • Reinforcement Learning (RL)
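
A standard way to turn a heuristic into dense reward is potential-based shaping with potential -h; the snippet above does not say the paper uses exactly this form, but it illustrates the idea:

    def shaped_reward(r, s, s_next, h, gamma=0.99):
        # Potential-based shaping: moving to states with lower heuristic value
        # (closer to the goal, per h) yields positive dense reward.
        return r + gamma * (-h(s_next)) - (-h(s))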

IQL-TD-MPC: Implicit Q-Learning for Hierarchical Model Predictive Control

no code implementations • 1 Jun 2023 • Rohan Chitnis, Yingchen Xu, Bobak Hashemi, Lucas Lehnert, Urun Dogan, Zheqing Zhu, Olivier Delalleau

Model-based reinforcement learning (RL) has shown great promise due to its sample efficiency, but still struggles with long-horizon sparse-reward tasks, especially in offline settings where the agent learns from a fixed dataset.

D4RL • Model-based Reinforcement Learning • +4

SMORE: Score Models for Offline Goal-Conditioned Reinforcement Learning

no code implementations • 3 Nov 2023 • Harshit Sikchi, Rohan Chitnis, Ahmed Touati, Alborz Geramifard, Amy Zhang, Scott Niekum

Offline Goal-Conditioned Reinforcement Learning (GCRL) is tasked with learning to achieve multiple goals in an environment purely from offline datasets using sparse reward functions.

Contrastive Learning • reinforcement-learning • +1

Sequential Decision-Making for Inline Text Autocomplete

no code implementations • 21 Mar 2024 • Rohan Chitnis, Shentao Yang, Alborz Geramifard

In particular, we hypothesize that the objectives under which sequential decision-making can improve autocomplete systems are not tailored solely to text entry speed, but more broadly to metrics such as user satisfaction and convenience.

Decision Making • Language Modelling
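
Those broader objectives could enter as reward terms in a sequential decision process over keystrokes; a purely hypothetical example that trades entry speed against interruption:

    def reward(shown, accepted, keystrokes_saved, interruption_cost=0.1):
        # Reward keystrokes saved by accepted suggestions, and charge a small
        # cost per displayed suggestion as a proxy for user convenience.
        # Terms and constants are illustrative, not the paper's.
        return keystrokes_saved * float(accepted) - interruption_cost * float(shown)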
