Search Results for author: Aidan Curtis

Found 6 papers, 4 papers with code

PG3: Policy-Guided Planning for Generalized Policy Generation

1 code implementation • 21 Apr 2022 • Ryan Yang, Tom Silver, Aidan Curtis, Tomas Lozano-Perez, Leslie Pack Kaelbling

In this work, we study generalized policy search-based methods with a focus on the score function used to guide the search over policies.
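As a rough, hypothetical illustration of the idea in the excerpt above (a score function steering search over candidate policies), a greedy score-guided search might look like the sketch below. This is not the paper's PG3 algorithm; `score_policy`, `neighbors`, and the policy representation are all placeholders.

```python
def policy_guided_search(initial_policy, score_policy, neighbors, iterations=100):
    """Hill-climb over candidate policies, keeping the best-scoring one.

    score_policy(policy) -> float   # e.g., fraction of training problems the policy solves
    neighbors(policy)    -> list    # small syntactic edits to the current policy
    """
    best_policy = initial_policy
    best_score = score_policy(best_policy)
    for _ in range(iterations):
        candidates = neighbors(best_policy)
        if not candidates:
            break
        candidate = max(candidates, key=score_policy)
        candidate_score = score_policy(candidate)
        if candidate_score <= best_score:
            break  # local optimum under this score function
        best_policy, best_score = candidate, candidate_score
    return best_policy
```

The choice of score function is exactly what determines which local optimum this kind of search settles into, which is why the excerpt singles it out.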

Long-Horizon Manipulation of Unknown Objects via Task and Motion Planning with Estimated Affordances

no code implementations • 9 Aug 2021 • Aidan Curtis, Xiaolin Fang, Leslie Pack Kaelbling, Tomás Lozano-Pérez, Caelan Reed Garrett

We present a strategy for designing and building very general robot manipulation systems that integrate a general-purpose task-and-motion planner with engineered and learned perception modules estimating properties and affordances of unknown objects.

Grasp Generation • Motion Planning • +1
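As a hedged sketch of the pipeline the abstract describes (perception modules feeding estimated properties and affordances of unknown objects into a general-purpose task-and-motion planner), the control flow might resemble the following. Every function, field, and data structure here is an assumption for illustration, not the authors' actual system.

```python
from dataclasses import dataclass

@dataclass
class ObjectEstimate:
    """Perceived properties/affordances of one unknown object (all fields assumed)."""
    name: str
    pose: tuple                 # estimated 6-DoF pose
    graspable: bool = False     # estimated affordance
    pushable: bool = False      # estimated affordance

def manipulate_unknown_objects(observe, estimate_affordances, tamp_plan, execute, goal):
    """Perceive -> estimate affordances -> plan with TAMP -> execute (illustrative only)."""
    observation = observe()                        # raw sensor data, e.g. RGB-D
    objects = estimate_affordances(observation)    # list[ObjectEstimate]
    plan = tamp_plan(objects, goal)                # task-and-motion plan over the estimates
    if plan is None:
        raise RuntimeError("No plan found for the estimated scene")
    for action in plan:
        execute(action)
```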

Planning with Learned Object Importance in Large Problem Instances using Graph Neural Networks

1 code implementation • 11 Sep 2020 • Tom Silver, Rohan Chitnis, Aidan Curtis, Joshua Tenenbaum, Tomas Lozano-Perez, Leslie Pack Kaelbling

We conclude that learning to predict a sufficient set of objects for a planning problem is a simple, powerful, and general mechanism for planning in large instances.

Motion Planning
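The takeaway above (predict a sufficient subset of objects, then plan on the reduced problem) can be sketched as a score-and-threshold filter placed in front of an off-the-shelf planner. The importance model, threshold, problem representation, and fallback below are assumptions for illustration, not the paper's graph neural network approach.

```python
def plan_with_learned_object_importance(problem, importance_model, planner, threshold=0.5):
    """Keep only objects the learned model deems important, then plan on the reduced problem.

    importance_model(problem, obj) -> float in [0, 1]   # learned relevance score
    planner(problem, objects)      -> plan or None
    """
    objects = problem["objects"]
    # Drop objects the model predicts are irrelevant to this planning problem.
    kept = [o for o in objects if importance_model(problem, o) >= threshold]
    plan = planner(problem, kept)
    if plan is None:
        # The reduced set was not sufficient after all; replan with every object.
        plan = planner(problem, objects)
    return plan
```

Planning over the reduced object set is what makes large instances tractable; the fallback simply guards against an overly aggressive filter.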

Flexible and Efficient Long-Range Planning Through Curious Exploration

no code implementations • ICML 2020 • Aidan Curtis, Minjian Xin, Dilip Arumugam, Kevin Feigelis, Daniel Yamins

In contrast, deep reinforcement learning (DRL) methods use flexible neural-network-based function approximators to discover policies that generalize naturally to unseen circumstances.

Imitation Learning • Model-based Reinforcement Learning • +3
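One common way to operationalize "curious exploration" is an intrinsic reward proportional to the prediction error of a learned forward dynamics model. The sketch below shows that generic curiosity signal only; it is not necessarily the formulation used in this paper, and the model interface and weighting are assumed.

```python
import numpy as np

def curiosity_bonus(dynamics_model, state, action, next_state):
    """Intrinsic reward = error of a learned forward model (a generic curiosity signal).

    dynamics_model(state, action) -> predicted next state as an np.ndarray
    """
    predicted = dynamics_model(state, action)
    return float(np.linalg.norm(predicted - next_state))

def total_reward(extrinsic_reward, bonus, beta=0.1):
    """Mix the task reward with the curiosity bonus; beta is an assumed weighting."""
    return extrinsic_reward + beta * bonus
```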
