Search Results for author: Leslie Pack Kaelbling

Found 51 papers, 17 papers with code

Learning Neuro-Symbolic Skills for Bilevel Planning

no code implementations • 21 Jun 2022 • Tom Silver, Ashay Athalye, Joshua B. Tenenbaum, Tomas Lozano-Perez, Leslie Pack Kaelbling

Decision-making is challenging in robotics environments with continuous object-centric states, continuous actions, long horizons, and sparse feedback.

Decision Making • Motion Planning

PG3: Policy-Guided Planning for Generalized Policy Generation

1 code implementation • 21 Apr 2022 • Ryan Yang, Tom Silver, Aidan Curtis, Tomas Lozano-Perez, Leslie Pack Kaelbling

In this work, we study generalized policy search-based methods with a focus on the score function used to guide the search over policies.
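The generalized policy search this snippet describes can be sketched as a greedy search over candidate policies driven by a pluggable score function; the focus of the paper is what that score should be. Everything below (integer "policies", the toy neighborhood and score) is an illustrative stand-in, not PG3's actual policy representation.

```python
def policy_search(initial_policy, neighbors, score, n_iter=100):
    """Greedy hill-climbing over policies, guided entirely by score().
    Each step keeps the best-scoring neighbor of the current policy."""
    policy, best = initial_policy, score(initial_policy)
    for _ in range(n_iter):
        improved = False
        for cand in neighbors(policy):
            s = score(cand)
            if s > best:
                policy, best, improved = cand, s, True
        if not improved:
            break  # local optimum under this score function
    return policy

# Toy instance: policies are integers, the score peaks at 7.
neighbors = lambda p: [p - 1, p + 1]
score = lambda p: -abs(p - 7)
found = policy_search(0, neighbors, score)
```

The point of the sketch is that the search machinery is generic: swapping in a different `score` (e.g. one informed by plans the candidate policy helps generate) changes which policies the search reaches without changing the loop.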

Inventing Relational State and Action Abstractions for Effective and Efficient Bilevel Planning

no code implementations • 17 Mar 2022 • Tom Silver, Rohan Chitnis, Nishanth Kumar, Willie McClinton, Tomas Lozano-Perez, Leslie Pack Kaelbling, Joshua Tenenbaum

In this paper, we develop a novel framework for learning state and action abstractions that are explicitly optimized for both effective (successful) and efficient (fast) bilevel planning.

Representation, learning, and planning algorithms for geometric task and motion planning

no code implementations • 9 Mar 2022 • Beomjoon Kim, Luke Shimanuki, Leslie Pack Kaelbling, Tomás Lozano-Pérez

The first is an algorithm for learning a rank function that guides the discrete task-level search, and the second is an algorithm for learning a sampler that guides the continuous motion-level search.

Motion Planning • Representation Learning
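The first component, a rank function guiding the discrete task-level search, amounts to ordering a best-first frontier by a learned score rather than a hand-written heuristic. A minimal sketch, assuming a toy task graph and a stand-in `rank` in place of the learned model:

```python
import heapq

def rank(skeleton):
    # Stand-in for a learned rank function over partial task skeletons:
    # here it simply prefers shorter skeletons. A real system would score
    # skeletons with a trained model.
    return len(skeleton)

def task_level_search(start, goal, successors):
    """Best-first search over discrete action skeletons, ordered by rank()."""
    frontier = [(rank((start,)), (start,))]
    visited = set()
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in successors(node):
            new_path = path + (nxt,)
            heapq.heappush(frontier, (rank(new_path), new_path))
    return None

# Toy task graph: abstract states a -> b -> c.
graph = {"a": ["b"], "b": ["c"], "c": []}
path = task_level_search("a", "c", lambda s: graph.get(s, []))
```

The second learned component (a sampler for the continuous motion level) would plug in at the point where each discrete skeleton is refined into concrete motions, which this sketch omits.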

Reinforcement Learning for Classical Planning: Viewing Heuristics as Dense Reward Generators

no code implementations • 30 Sep 2021 • Clement Gehring, Masataro Asai, Rohan Chitnis, Tom Silver, Leslie Pack Kaelbling, Shirin Sohrabi, Michael Katz

In this paper, we propose to leverage domain-independent heuristic functions commonly used in the classical planning literature to improve the sample efficiency of RL.

reinforcement-learning

On the Expressiveness and Learning of Relational Neural Networks on Hypergraphs

no code implementations • 29 Sep 2021 • Zhezheng Luo, Jiayuan Mao, Joshua B. Tenenbaum, Leslie Pack Kaelbling

Our first contribution is a fine-grained analysis of the expressiveness of these neural networks, that is, the set of functions that they can realize and the set of problems that they can solve.

Learning Rational Skills for Planning from Demonstrations and Instructions

no code implementations • 29 Sep 2021 • Zhezheng Luo, Jiayuan Mao, Jiajun Wu, Tomás Lozano-Pérez, Joshua B. Tenenbaum, Leslie Pack Kaelbling

We present a framework for learning compositional, rational skill models (RatSkills) that support efficient planning and inverse planning for achieving novel goals and recognizing activities.

Efficient Training and Inference of Hypergraph Reasoning Networks

no code implementations • 29 Sep 2021 • Guangxuan Xiao, Leslie Pack Kaelbling, Jiajun Wu, Jiayuan Mao

To leverage the sparsity in hypergraph neural networks, SpaLoc represents the grounding of relationships such as parent and grandparent as sparse tensors and uses neural networks and finite-domain quantification operations to infer new facts based on the input.

Knowledge Graphs • Logical Reasoning
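The core idea in the snippet, representing relations as sparse tensors and deriving new facts by finite-domain quantification, can be illustrated in a few lines. This is a dense toy version of the quantification operation only, not SpaLoc's sparse-tensor machinery or its neural components:

```python
import numpy as np

# Toy domain: 4 people, indices 0..3.
# parent[i, j] == True means "i is a parent of j".
n = 4
parent = np.zeros((n, n), dtype=bool)
parent[0, 1] = True
parent[1, 2] = True
parent[2, 3] = True

# Existential quantification over the intermediate person k:
#   grandparent[i, j] = OR_k (parent[i, k] AND parent[k, j]),
# i.e. a boolean matrix product. For sparse relations this product
# stays sparse, which is the property SpaLoc exploits.
grandparent = (parent.astype(int) @ parent.astype(int)) > 0
```

With sparse storage the same contraction touches only the nonzero entries, which is what makes inference tractable when most relations are false.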

Long-Horizon Manipulation of Unknown Objects via Task and Motion Planning with Estimated Affordances

no code implementations • 9 Aug 2021 • Aidan Curtis, Xiaolin Fang, Leslie Pack Kaelbling, Tomás Lozano-Pérez, Caelan Reed Garrett

We present a strategy for designing and building very general robot manipulation systems involving the integration of a general-purpose task-and-motion planner with engineered and learned perception modules that estimate properties and affordances of unknown objects.

Grasp Generation • Motion Planning

Temporal and Object Quantification Networks

no code implementations • 10 Jun 2021 • Jiayuan Mao, Zhezheng Luo, Chuang Gan, Joshua B. Tenenbaum, Jiajun Wu, Leslie Pack Kaelbling, Tomer D. Ullman

We present Temporal and Object Quantification Networks (TOQ-Nets), a new class of neuro-symbolic networks with a structural bias that enables them to learn to recognize complex relational-temporal events.

Temporal Sequences

Learning Symbolic Operators for Task and Motion Planning

1 code implementation • 28 Feb 2021 • Tom Silver, Rohan Chitnis, Joshua Tenenbaum, Leslie Pack Kaelbling, Tomas Lozano-Perez

We then propose a bottom-up relational learning method for operator learning and show how the learned operators can be used for planning in a TAMP system.

Motion Planning • Operator learning • +1

Temporal and Object Quantification Nets

no code implementations • 1 Jan 2021 • Jiayuan Mao, Zhezheng Luo, Chuang Gan, Joshua B. Tenenbaum, Jiajun Wu, Leslie Pack Kaelbling, Tomer Ullman

We aim to learn generalizable representations for complex activities by quantifying over both entities and time, as in “the kicker is behind all the other players,” or “the player controls the ball until it moves toward the goal.” Such a structural inductive bias of object relations, object quantification, and temporal orders will enable the learned representation to generalize to situations with varying numbers of agents, objects, and time courses.

Event Detection • Inductive Bias
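The two quantifiers in the snippet's example, "behind ALL the other players" (over objects) and "until/always" (over time), reduce to reductions along the object and time axes of a trajectory tensor. A tiny numpy sketch of that structural bias, with made-up positions standing in for learned features (this is not the TOQ-Net architecture itself):

```python
import numpy as np

# Toy trajectory: x-positions of one kicker and 3 other players over 5 steps.
T = 5
kicker_x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
others_x = np.array([[2.5, 3.0, 4.0]] * T)       # shape (T, n_others)

# Object quantification: "the kicker is behind ALL other players" at each t
# is a universal quantifier, realized as a reduction over the object axis.
behind_all = (kicker_x[:, None] < others_x).all(axis=1)   # shape (T,)

# Temporal quantification: "always" is a reduction over the time axis.
always_behind = bool(behind_all.all())
```

Because both quantifiers are axis reductions, the same computation applies unchanged to trajectories with different numbers of players or timesteps, which is exactly the generalization the snippet describes.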

Measuring few-shot extrapolation with program induction

no code implementations • NeurIPS Workshop CAP 2020 • Ferran Alet, Javier Lopez-Contreras, Joshua B. Tenenbaum, Tomás Lozano-Pérez, Leslie Pack Kaelbling

Program induction lies at the opposite end of the spectrum: programs are capable of extrapolating from very few examples, but we still do not know how to efficiently search for complex programs.

Meta-Learning • Program induction

Integrated Task and Motion Planning

no code implementations • 2 Oct 2020 • Caelan Reed Garrett, Rohan Chitnis, Rachel Holladay, Beomjoon Kim, Tom Silver, Leslie Pack Kaelbling, Tomás Lozano-Pérez

The problem of planning for a robot that operates in environments containing a large number of objects, taking actions to move itself through the world as well as to change the state of the objects, is known as task and motion planning (TAMP).

Motion Planning

Learning Online Data Association

no code implementations • 28 Sep 2020 • Yilun Du, Joshua B. Tenenbaum, Tomás Lozano-Pérez, Leslie Pack Kaelbling

When an agent interacts with a complex environment, it receives a stream of percepts in which it may detect entities, such as objects or people.

Representation Learning

Planning with Learned Object Importance in Large Problem Instances using Graph Neural Networks

1 code implementation • 11 Sep 2020 • Tom Silver, Rohan Chitnis, Aidan Curtis, Joshua Tenenbaum, Tomas Lozano-Perez, Leslie Pack Kaelbling

We conclude that learning to predict a sufficient set of objects for a planning problem is a simple, powerful, and general mechanism for planning in large instances.

Motion Planning
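The "predict a sufficient set of objects" mechanism from the snippet can be sketched as a prune-then-plan loop: plan in a reduced problem containing only objects a learned model scores as important, and fall back to the full problem if pruning was too aggressive. The per-object scores and the toy planner below are hypothetical stand-ins (the paper uses a graph neural network):

```python
def plan_with_object_pruning(objects, importance, plan_fn, threshold=0.5):
    """Plan in a reduced problem containing only objects predicted important;
    retry with the full object set if the reduced problem is unsolvable,
    which preserves completeness despite an imperfect predictor."""
    subset = [o for o in objects if importance[o] >= threshold]
    plan = plan_fn(subset)
    if plan is None:
        plan = plan_fn(objects)   # fall back: over-pruned the problem
    return plan

# Toy planner: succeeds iff the key object survives pruning.
toy_plan_fn = lambda objs: ["pick(key)"] if "key" in objs else None
scores = {"key": 0.9, "clutter1": 0.1, "clutter2": 0.2}
result = plan_with_object_pruning(list(scores), scores, toy_plan_fn)
```

The speedup comes from the planner never grounding actions over the pruned clutter objects; the fallback keeps the scheme sound even when the predictor is wrong.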

CAMPs: Learning Context-Specific Abstractions for Efficient Planning in Factored MDPs

1 code implementation • 26 Jul 2020 • Rohan Chitnis, Tom Silver, Beomjoon Kim, Leslie Pack Kaelbling, Tomas Lozano-Perez

A general meta-planning strategy is to learn to impose constraints on the states considered and actions taken by the agent.

Motion Planning

Meta-learning curiosity algorithms

1 code implementation • ICLR 2020 • Ferran Alet, Martin F. Schneider, Tomas Lozano-Perez, Leslie Pack Kaelbling

We hypothesize that curiosity is a mechanism found by evolution that encourages meaningful exploration early in an agent's life in order to expose it to experiences that enable it to obtain high rewards over the course of its lifetime.

Acrobot • Meta-Learning

GLIB: Efficient Exploration for Relational Model-Based Reinforcement Learning via Goal-Literal Babbling

1 code implementation • 22 Jan 2020 • Rohan Chitnis, Tom Silver, Joshua Tenenbaum, Leslie Pack Kaelbling, Tomas Lozano-Perez

We address the problem of efficient exploration for transition model learning in the relational model-based reinforcement learning setting without extrinsic goals or rewards.

Decision Making • Efficient Exploration • +2
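The goal-literal babbling loop the title refers to can be sketched as: sample ("babble") a goal, plan toward it with the current learned transition model, and keep the transitions observed during execution as training data. All the callables below are illustrative stand-ins, not the paper's actual interfaces:

```python
import random

def glib_exploration_step(model, plan, execute, sample_goal_literal, rng):
    """One exploration step in the spirit of goal-literal babbling: the
    babbled goal drives the agent into informative states even though
    there are no extrinsic goals or rewards."""
    goal = sample_goal_literal(rng)
    actions = plan(model, goal)
    if actions is None:
        return []                 # model found no plan; a real agent would
                                  # fall back to e.g. a random action
    return [execute(a) for a in actions]

# Toy stand-ins: a fixed babbled goal, a trivial planner, a logging executor.
rng = random.Random(0)
sample = lambda r: ("on", "a", "b")
toy_plan = lambda model, goal: ["pick(a)", "place(a,b)"]
toy_execute = lambda a: (a, "observed")
data = glib_exploration_step({}, toy_plan, toy_execute, sample, rng)
```

The collected transitions would then be fed back into relational model learning, closing the exploration loop.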

Neural Relational Inference with Fast Modular Meta-learning

1 code implementation • NeurIPS 2019 • Ferran Alet, Erica Weng, Tomás Lozano-Pérez, Leslie Pack Kaelbling

Framing inference as the inner-loop optimization of meta-learning leads to a model-based approach that is more data-efficient and capable of estimating the state of entities that we do not observe directly, but whose existence can be inferred from their effect on observed entities.

Meta-Learning
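"Inference as the inner-loop optimization of meta-learning" here means: given a fixed library of interaction modules, search over which module governs each pairwise interaction so as to minimize prediction loss. The paper uses simulated annealing over assignments; the sketch below does an exhaustive search on a toy instance to keep it self-contained:

```python
import itertools

def infer_structure(modules, pair_data, loss_fn):
    """Inner loop of modular meta-learning: pick, for each entity pair,
    the module whose predictions best fit the observed data. Returns the
    best assignment as {pair: module_index}."""
    pairs = list(pair_data)
    best, best_loss = None, float("inf")
    for assignment in itertools.product(range(len(modules)), repeat=len(pairs)):
        loss = sum(loss_fn(modules[m], pair_data[p])
                   for m, p in zip(assignment, pairs))
        if loss < best_loss:
            best, best_loss = dict(zip(pairs, assignment)), loss
    return best

# Toy library: "no interaction" vs "identity" modules, one observed pair.
modules = [lambda x: 0.0, lambda x: x]
data = {("a", "b"): 2.0}
loss = lambda mod, obs: abs(mod(obs) - obs)
structure = infer_structure(modules, data, loss)
```

The discrete assignment found this way is the inferred relational structure; estimating unobserved entities amounts to also optimizing over their hypothesized states in the same inner loop.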

Online Replanning in Belief Space for Partially Observable Task and Motion Problems

1 code implementation • 11 Nov 2019 • Caelan Reed Garrett, Chris Paxton, Tomás Lozano-Pérez, Leslie Pack Kaelbling, Dieter Fox

To solve multi-step manipulation tasks in the real world, an autonomous robot must take actions to observe its environment and react to unexpected observations.

Continuous Control

Differentiable Algorithm Networks for Composable Robot Learning

no code implementations • 28 May 2019 • Peter Karkus, Xiao Ma, David Hsu, Leslie Pack Kaelbling, Wee Sun Lee, Tomas Lozano-Perez

This paper introduces the Differentiable Algorithm Network (DAN), a composable architecture for robot learning systems.

Navigate

Graph Element Networks: adaptive, structured computation and memory

2 code implementations • 18 Apr 2019 • Ferran Alet, Adarsh K. Jeewajee, Maria Bauza, Alberto Rodriguez, Tomas Lozano-Perez, Leslie Pack Kaelbling

We explore the use of graph neural networks (GNNs) to model spatial processes in which there is no a priori graphical structure.

Few-Shot Bayesian Imitation Learning with Logical Program Policies

no code implementations • 12 Apr 2019 • Tom Silver, Kelsey R. Allen, Alex K. Lew, Leslie Pack Kaelbling, Josh Tenenbaum

We propose an expressive class of policies, a strong but general prior, and a learning algorithm that, together, can learn interesting policies from very few examples.

Bayesian Inference • Imitation Learning • +1

Every Local Minimum Value is the Global Minimum Value of Induced Model in Non-convex Machine Learning

no code implementations • 7 Apr 2019 • Kenji Kawaguchi, Jiaoyang Huang, Leslie Pack Kaelbling

Furthermore, as special cases of our general results, this article improves or complements several state-of-the-art theoretical results on deep neural networks, deep residual networks, and overparameterized deep neural networks with a unified proof technique and novel geometric insights.

BIG-bench Machine Learning • Representation Learning

Elimination of All Bad Local Minima in Deep Learning

no code implementations • 2 Jan 2019 • Kenji Kawaguchi, Leslie Pack Kaelbling

At every local minimum of any deep neural network with these added neurons, the set of parameters of the original neural network (without added neurons) is guaranteed to be a global minimum of the original neural network.

General Classification • Multi-class Classification

Effect of Depth and Width on Local Minima in Deep Learning

no code implementations • 20 Nov 2018 • Kenji Kawaguchi, Jiaoyang Huang, Leslie Pack Kaelbling

In this paper, we analyze the effects of depth and width on the quality of local minima, without strong over-parameterization and simplification assumptions in the literature.

Learning sparse relational transition models

no code implementations • ICLR 2019 • Victoria Xia, Zi Wang, Leslie Pack Kaelbling

For any action, a rule selects a set of relevant objects and computes a distribution over properties of just those objects in the resulting state given their properties in the previous state.
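The rule semantics in the snippet, select the relevant objects, then update only their properties, can be sketched directly. The sketch is deterministic for brevity, whereas the paper's rules compute a distribution over the selected objects' properties; the selector and transition functions are illustrative stand-ins:

```python
def apply_rule(state, action, select, transition):
    """Apply one sparse relational transition rule: `select` picks the
    objects relevant to `action`, and `transition` updates only their
    properties. Every unselected object is untouched, which is the
    sparsity assumption that makes these models sample-efficient."""
    relevant = select(state, action)
    next_state = dict(state)
    for obj in relevant:
        next_state[obj] = transition(state[obj], action)
    return next_state

# Toy domain: pushing block "a" increments its position; "b" is irrelevant.
state = {"a": 0, "b": 5}
select = lambda s, act: [act[1]]            # actions look like ("push", obj)
transition = lambda props, act: props + 1
result = apply_rule(state, ("push", "a"), select, transition)
```

Because the rule only ever reads and writes the selected objects, the learned model's size and its prediction cost are independent of how many other objects the state contains.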

Learning Quickly to Plan Quickly Using Modular Meta-Learning

1 code implementation • 20 Sep 2018 • Rohan Chitnis, Leslie Pack Kaelbling, Tomás Lozano-Pérez

Multi-object manipulation problems in continuous state and action spaces can be solved by planners that search over sampled values for the continuous parameters of operators.

Meta-Learning

Learning to guide task and motion planning using score-space representation

no code implementations • 26 Jul 2018 • Beomjoon Kim, Zi Wang, Leslie Pack Kaelbling, Tomas Lozano-Perez

In this paper, we propose a learning algorithm that speeds up the search in task and motion planning problems.

Motion Planning

Learning What Information to Give in Partially Observed Domains

no code implementations • 21 May 2018 • Rohan Chitnis, Leslie Pack Kaelbling, Tomás Lozano-Pérez

We consider such a setting in which the agent can, while acting, transmit declarative information to the human that helps them understand aspects of this unseen environment.

Active model learning and diverse action sampling for task and motion planning

2 code implementations • 2 Mar 2018 • Zi Wang, Caelan Reed Garrett, Leslie Pack Kaelbling, Tomás Lozano-Pérez

Solving long-horizon problems in complex domains requires flexible generative planning that can combine primitive abilities in novel combinations to solve problems as they arise in the world.

Active Learning • Motion Planning

Integrating Human-Provided Information Into Belief State Representation Using Dynamic Factorization

no code implementations • 28 Feb 2018 • Rohan Chitnis, Leslie Pack Kaelbling, Tomás Lozano-Pérez

In partially observed environments, it can be useful for a human to provide the robot with declarative information that represents probabilistic relational constraints on properties of objects in the world, augmenting the robot's sensory observations.

PDDLStream: Integrating Symbolic Planners and Blackbox Samplers via Optimistic Adaptive Planning

4 code implementations • 23 Feb 2018 • Caelan Reed Garrett, Tomás Lozano-Pérez, Leslie Pack Kaelbling

We extend PDDL to support a generic, declarative specification for these procedures that treats their implementation as black boxes.

Motion Planning
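"Treating their implementation as black boxes" means the planner only ever draws certified samples from a declared procedure, never inspecting how they are produced. A minimal sketch of that interface as a Python generator; the `grasp` fact and pose range here are made-up illustrations, not PDDLStream's actual specification syntax:

```python
import random

def grasp_stream(obj, rng):
    """A blackbox sampler exposed as a stream: each draw yields a new
    certified fact (here, a grasp for `obj`) plus its continuous value.
    The planner consumes these lazily, requesting more only as needed."""
    while True:
        yield {"fact": ("grasp", obj), "pose": rng.uniform(-3.14, 3.14)}

rng = random.Random(0)
stream = grasp_stream("block", rng)
samples = [next(stream) for _ in range(3)]
```

The "optimistic adaptive planning" part of the paper sits on top of this interface: the planner first plans as if streams will succeed, then lazily invokes them to bind the continuous values.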

Generalization in Machine Learning via Analytical Learning Theory

2 code implementations • 21 Feb 2018 • Kenji Kawaguchi, Yoshua Bengio, Vikas Verma, Leslie Pack Kaelbling

This paper introduces a novel measure-theoretic theory for machine learning that does not require statistical assumptions.

BIG-bench Machine Learning • Learning Theory • +2

Learning to select examples for program synthesis

no code implementations • ICLR 2018 • Yewen Pu, Zachery Miranda, Armando Solar-Lezama, Leslie Pack Kaelbling

In this paper we address this challenge by constructing a representative subset of examples that is both small and is able to constrain the solver sufficiently.

Program Synthesis

Selecting Representative Examples for Program Synthesis

1 code implementation • ICML 2018 • Yewen Pu, Zachery Miranda, Armando Solar-Lezama, Leslie Pack Kaelbling

Program synthesis is a class of regression problems where one seeks a solution, in the form of a source-code program, mapping the inputs to their corresponding outputs exactly.

Program Synthesis

Guiding the search in continuous state-action spaces by learning an action sampling distribution from off-target samples

no code implementations • 4 Nov 2017 • Beomjoon Kim, Leslie Pack Kaelbling, Tomas Lozano-Perez

For such complex planning problems, unguided uniform sampling of actions until a path to a goal is found is hopelessly inefficient, and gradient-based approaches often fall short when the optimization manifold of a given problem is not smooth.

Generalization in Deep Learning

no code implementations • 16 Oct 2017 • Kenji Kawaguchi, Leslie Pack Kaelbling, Yoshua Bengio

This paper provides theoretical insights into why and how deep learning can generalize well, despite its large capacity, complexity, possible algorithmic instability, nonrobustness, and sharp minima, responding to an open question in the literature.

STRIPS Planning in Infinite Domains

4 code implementations • 1 Jan 2017 • Caelan Reed Garrett, Tomás Lozano-Pérez, Leslie Pack Kaelbling

We introduce STRIPStream: an extension of the STRIPS language which can model these domains by supporting the specification of blackbox generators to handle complex constraints.

Motion Planning

Focused Model-Learning and Planning for Non-Gaussian Continuous State-Action Systems

no code implementations • 26 Jul 2016 • Zi Wang, Stefanie Jegelka, Leslie Pack Kaelbling, Tomás Lozano-Pérez

We introduce a framework for model learning and planning in stochastic domains with continuous state and action spaces and non-Gaussian transition models.

Backward-Forward Search for Manipulation Planning

no code implementations • 12 Apr 2016 • Caelan Reed Garrett, Tomas Lozano-Perez, Leslie Pack Kaelbling

In this paper we address planning problems in high-dimensional hybrid configuration spaces, with a particular focus on manipulation planning problems involving many objects.

Bayesian Optimization with Exponential Convergence

no code implementations • NeurIPS 2015 • Kenji Kawaguchi, Leslie Pack Kaelbling, Tomás Lozano-Pérez

This paper presents a Bayesian optimization method with exponential convergence without the need of auxiliary optimization and without the delta-cover sampling.
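For orientation, here is what a basic Bayesian optimization loop looks like: fit a Gaussian-process posterior to the evaluations so far, then evaluate where an acquisition value (UCB below) is highest. Note this generic sketch still maximizes an acquisition over a grid, i.e. it has exactly the auxiliary-optimization step the paper's method avoids; kernel, length-scale, and `beta` are arbitrary choices for the toy:

```python
import numpy as np

def gp_posterior(X, y, Xs, ls=0.3, noise=1e-5):
    """Exact GP posterior mean/variance with an RBF kernel on a 1-D domain."""
    k = lambda a, b: np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * ls ** 2))
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = np.clip(1.0 - np.sum(Ks * sol, axis=0), 0.0, None)
    return mu, var

def bayes_opt(f, grid, n_iter=10, beta=2.0):
    """Minimal UCB Bayesian optimization over a fixed 1-D grid."""
    X = np.array([grid[0], grid[-1]])          # seed with the endpoints
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        mu, var = gp_posterior(X, y, grid)
        x = grid[int(np.argmax(mu + beta * np.sqrt(var)))]  # acquisition max
        X, y = np.append(X, x), np.append(y, f(x))
    return X[int(np.argmax(y))]

grid = np.linspace(0.0, 1.0, 101)
best = bayes_opt(lambda x: -(x - 0.3) ** 2, grid)
```

The paper's contribution is replacing both the inner acquisition optimization and delta-cover sampling while still guaranteeing exponential convergence; the loop above is only the baseline picture those improvements are measured against.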

Object-based World Modeling in Semi-Static Environments with Dependent Dirichlet-Process Mixtures

no code implementations • 2 Dec 2015 • Lawson L. S. Wong, Thanard Kurutach, Leslie Pack Kaelbling, Tomás Lozano-Pérez

We refer to this attribute-based representation as a world model, and consider how to acquire it via noisy perception and maintain it over time, as objects are added, changed, and removed in the world.
