Search Results for author: Tom Silver

Found 20 papers, 13 papers with code

Practice Makes Perfect: Planning to Learn Skill Parameter Policies

no code implementations · 22 Feb 2024 · Nishanth Kumar, Tom Silver, Willie McClinton, Linfeng Zhao, Stephen Proulx, Tomás Lozano-Pérez, Leslie Pack Kaelbling, Jennifer Barry

We consider a setting where a robot is initially equipped with (1) a library of parameterized skills, (2) an AI planner for sequencing together the skills given a goal, and (3) a very general prior distribution for selecting skill parameters.

Active Learning · Decision Making

Generalized Planning in PDDL Domains with Pretrained Large Language Models

1 code implementation · 18 May 2023 · Tom Silver, Soham Dan, Kavitha Srinivas, Joshua B. Tenenbaum, Leslie Pack Kaelbling, Michael Katz

We investigate whether LLMs can serve as generalized planners: given a domain and training tasks, generate a program that efficiently produces plans for other tasks in the domain.
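To make the idea of a "generalized planner" concrete, the following is a hypothetical example (not from the paper) of the kind of domain-specific program an LLM might synthesize: a single function that, given any task in a toy block-unstacking domain, emits a correct plan without search.

```python
# Hypothetical output of an LLM generalized planner for a toy domain.
# Facts are tuples like ("on", "a", "b"); actions are tuples like
# ("unstack", "a", "b"). The program solves every task in the domain,
# which is what distinguishes it from a plan for a single task.

def generalized_plan(init_facts, goal_facts):
    """Return a plan that puts every stacked block onto the table."""
    on = {b: under for (pred, b, under) in init_facts if pred == "on"}
    plan = []
    # Unstack from the top down: repeatedly remove blocks nothing sits on.
    while on:
        clear = [b for b in on if b not in on.values()]
        for b in clear:
            plan.append(("unstack", b, on.pop(b)))
    return plan

init = [("on", "a", "b"), ("on", "b", "c")]
plan = generalized_plan(init, [("ontable", "a"), ("ontable", "b")])
# "a" must be unstacked before "b", since "a" sits on top
```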

Embodied Active Learning of Relational State Abstractions for Bilevel Planning

no code implementations · 8 Mar 2023 · Amber Li, Tom Silver

State abstraction is an effective technique for planning in robotics environments with continuous states and actions, long task horizons, and sparse feedback.

Active Learning · Informativeness

Learning Efficient Abstract Planning Models that Choose What to Predict

1 code implementation · 16 Aug 2022 · Nishanth Kumar, Willie McClinton, Rohan Chitnis, Tom Silver, Tomás Lozano-Pérez, Leslie Pack Kaelbling

An effective approach to solving long-horizon tasks in robotics domains with continuous state and action spaces is bilevel planning, wherein a high-level search over an abstraction of an environment is used to guide low-level decision-making.

Decision Making · Operator Learning
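The bilevel structure described above can be sketched in a few lines. This is a simplified illustration under assumed names (not the paper's code): a high-level search enumerates skeletons of abstract operators, and a low-level sampler tries to refine each skeleton into continuous parameters, falling back to the next skeleton on failure.

```python
import itertools
import random

def bilevel_plan(operators, max_len, refine, trials=10, seed=0):
    """Return (skeleton, params) for the first refinable skeleton, else None."""
    rng = random.Random(seed)
    # High level: enumerate operator skeletons in increasing length.
    for n in range(1, max_len + 1):
        for skeleton in itertools.product(operators, repeat=n):
            # Low level: sample continuous parameters for the skeleton.
            for _ in range(trials):
                params = [rng.uniform(0, 1) for _ in skeleton]
                if refine(skeleton, params):
                    return list(skeleton), params
    return None

# Toy refinement check: only skeletons ending in "push" are feasible.
feasible = lambda skel, ps: skel[-1] == "push" and all(p >= 0 for p in ps)
result = bilevel_plan(["pick", "push"], max_len=2, refine=feasible)
```

The payoff of this factoring is that the discrete search never has to reason about continuous geometry, and the continuous sampler only ever solves short, skeleton-conditioned subproblems.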

Learning Neuro-Symbolic Skills for Bilevel Planning

no code implementations · 21 Jun 2022 · Tom Silver, Ashay Athalye, Joshua B. Tenenbaum, Tomas Lozano-Perez, Leslie Pack Kaelbling

Decision-making is challenging in robotics environments with continuous object-centric states, continuous actions, long horizons, and sparse feedback.

Decision Making · Motion Planning · +1

PG3: Policy-Guided Planning for Generalized Policy Generation

1 code implementation · 21 Apr 2022 · Ryan Yang, Tom Silver, Aidan Curtis, Tomas Lozano-Perez, Leslie Pack Kaelbling

In this work, we study generalized policy search-based methods with a focus on the score function used to guide the search over policies.

Predicate Invention for Bilevel Planning

1 code implementation · 17 Mar 2022 · Tom Silver, Rohan Chitnis, Nishanth Kumar, Willie McClinton, Tomas Lozano-Perez, Leslie Pack Kaelbling, Joshua Tenenbaum

Our key idea is to learn predicates by optimizing a surrogate objective that is tractable but faithful to our real efficient-planning objective.

Reinforcement Learning for Classical Planning: Viewing Heuristics as Dense Reward Generators

no code implementations · 30 Sep 2021 · Clement Gehring, Masataro Asai, Rohan Chitnis, Tom Silver, Leslie Pack Kaelbling, Shirin Sohrabi, Michael Katz

In this paper, we propose to leverage domain-independent heuristic functions commonly used in the classical planning literature to improve the sample efficiency of RL.

Reinforcement Learning (RL)
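One standard way to turn a planning heuristic into a dense reward, sketched here as an illustration rather than the paper's exact method, is potential-based shaping with potential Φ(s) = −h(s), which preserves the optimal policy while rewarding every step that reduces the heuristic estimate of distance-to-goal.

```python
def shaped_reward(r, h_s, h_next, gamma=0.99):
    """Potential-based shaping with Phi(s) = -h(s):
    r' = r + gamma * Phi(s') - Phi(s) = r + h(s) - gamma * h(s')."""
    return r + h_s - gamma * h_next

# Moving one step closer to the goal (h drops from 3 to 2) earns a
# positive dense reward even when the environment reward is 0.
dense = shaped_reward(r=0.0, h_s=3.0, h_next=2.0)
```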

Learning Symbolic Operators for Task and Motion Planning

1 code implementation · 28 Feb 2021 · Tom Silver, Rohan Chitnis, Joshua Tenenbaum, Leslie Pack Kaelbling, Tomas Lozano-Perez

We then propose a bottom-up relational learning method for operator learning and show how the learned operators can be used for planning in a TAMP system.

Motion Planning · Operator Learning · +2

Online Bayesian Goal Inference for Boundedly Rational Planning Agents

no code implementations · NeurIPS 2020 · Tan Zhi-Xuan, Jordyn Mann, Tom Silver, Josh Tenenbaum, Vikash Mansinghka

These models are specified as probabilistic programs, allowing us to represent and perform efficient Bayesian inference over an agent's goals and internal planning processes.

Bayesian Inference

Integrated Task and Motion Planning

no code implementations · 2 Oct 2020 · Caelan Reed Garrett, Rohan Chitnis, Rachel Holladay, Beomjoon Kim, Tom Silver, Leslie Pack Kaelbling, Tomás Lozano-Pérez

The problem of planning for a robot that operates in environments containing a large number of objects, taking actions to move itself through the world as well as to change the state of the objects, is known as task and motion planning (TAMP).

Motion Planning · Task and Motion Planning

Planning with Learned Object Importance in Large Problem Instances using Graph Neural Networks

1 code implementation · 11 Sep 2020 · Tom Silver, Rohan Chitnis, Aidan Curtis, Joshua Tenenbaum, Tomas Lozano-Perez, Leslie Pack Kaelbling

We conclude that learning to predict a sufficient set of objects for a planning problem is a simple, powerful, and general mechanism for planning in large instances.

Motion Planning · Task and Motion Planning

Online Bayesian Goal Inference for Boundedly-Rational Planning Agents

1 code implementation · 13 Jun 2020 · Tan Zhi-Xuan, Jordyn L. Mann, Tom Silver, Joshua B. Tenenbaum, Vikash K. Mansinghka

These models are specified as probabilistic programs, allowing us to represent and perform efficient Bayesian inference over an agent's goals and internal planning processes.

Bayesian Inference

PDDLGym: Gym Environments from PDDL Problems

1 code implementation · 15 Feb 2020 · Tom Silver, Rohan Chitnis

We present PDDLGym, a framework that automatically constructs OpenAI Gym environments from PDDL domains and problems.

Decision Making · OpenAI Gym · +2
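The core idea, compiling a relational planning problem into a Gym-style environment, can be illustrated with a self-contained sketch. The class and names below are simplified stand-ins for this listing, not PDDLGym's actual API: states are sets of ground literals, actions are ground operators, and step() applies the operator's add and delete effects.

```python
class LiteralEnv:
    """Gym-style env over a relational state of ground literals."""

    def __init__(self, init, goal, operators):
        # operators maps action name -> (preconditions, add effects, delete effects)
        self.init, self.goal, self.operators = frozenset(init), goal, operators
        self.state = self.init

    def reset(self):
        self.state = self.init
        return self.state

    def step(self, action):
        pre, add, delete = self.operators[action]
        assert pre <= self.state, f"preconditions of {action} not met"
        self.state = (self.state - delete) | add
        done = self.goal <= self.state  # goal literals all hold
        return self.state, float(done), done, {}

# Toy domain: open a door, then walk through it.
ops = {
    "open":  ({("closed",)}, {("open",)}, {("closed",)}),
    "enter": ({("open",)}, {("inside",)}, set()),
}
env = LiteralEnv(init={("closed",)}, goal={("inside",)}, operators=ops)
env.reset()
env.step("open")
state, reward, done, _ = env.step("enter")
```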

GLIB: Efficient Exploration for Relational Model-Based Reinforcement Learning via Goal-Literal Babbling

1 code implementation · 22 Jan 2020 · Rohan Chitnis, Tom Silver, Joshua Tenenbaum, Leslie Pack Kaelbling, Tomas Lozano-Perez

We address the problem of efficient exploration for transition model learning in the relational model-based reinforcement learning setting without extrinsic goals or rewards.

Decision Making · Efficient Exploration · +3
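A rough sketch of the goal-literal-babbling idea, with assumed structure rather than the paper's code: instead of taking random actions, the agent repeatedly sets itself a candidate goal literal it has rarely achieved, plans to it under its current learned model, and keeps the resulting transitions as training data for the transition model.

```python
def glib_explore(candidate_goals, plan, execute, rounds=5):
    """Collect transitions by babbling the least-achieved goal each round."""
    achieved_counts = {g: 0 for g in candidate_goals}
    transitions = []
    for _ in range(rounds):
        # Prefer goals achieved least often: they are most informative.
        goal = min(candidate_goals, key=lambda g: achieved_counts[g])
        actions = plan(goal)               # plan under the current model
        success, data = execute(actions)   # act in the real environment
        transitions.extend(data)           # training data for the model
        achieved_counts[goal] += int(success)
    return transitions

# Toy usage: "planning" to a goal literal just emits it as an action,
# and execution always succeeds.
data = glib_explore(
    ["a", "b"],
    plan=lambda g: [g],
    execute=lambda acts: (True, list(acts)),
)
```

With two goals, the novelty preference alternates between them rather than fixating on one, which is the exploration behavior the abstract describes.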

Few-Shot Bayesian Imitation Learning with Logical Program Policies

no code implementations · 12 Apr 2019 · Tom Silver, Kelsey R. Allen, Alex K. Lew, Leslie Pack Kaelbling, Josh Tenenbaum

We propose an expressive class of policies, a strong but general prior, and a learning algorithm that, together, can learn interesting policies from very few examples.

Bayesian Inference · Imitation Learning · +1

Residual Policy Learning

1 code implementation · 15 Dec 2018 · Tom Silver, Kelsey Allen, Josh Tenenbaum, Leslie Kaelbling

In these tasks, reinforcement learning from scratch remains data-inefficient or intractable, but learning a residual on top of the initial controller can yield substantial improvements.

Reinforcement Learning (RL)
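The residual idea is simple enough to sketch directly. This is a minimal illustration, assuming a fixed hand-written controller and a learned residual that both map observations to actions: the executed action is controller(obs) + residual(obs), so learning only has to correct the controller's errors rather than start from scratch.

```python
def residual_policy(controller, residual):
    """Compose a fixed base controller with a learned additive correction."""
    def act(obs):
        base = controller(obs)
        correction = residual(obs)
        return [b + c for b, c in zip(base, correction)]
    return act

# Toy 1-D reaching task: the hand-written controller overshoots the
# target by 0.2, and a (pretend) learned residual cancels the bias.
controller = lambda obs: [obs[0] + 0.2]
residual = lambda obs: [-0.2]
policy = residual_policy(controller, residual)
action = policy([1.0])
```

Because the residual starts near zero, the composed policy is no worse than the base controller at the start of training, which is what makes this data-efficient relative to learning from scratch.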
