Search Results for author: Leslie Pack Kaelbling

Found 66 papers, 24 papers with code

Object-based World Modeling in Semi-Static Environments with Dependent Dirichlet-Process Mixtures

no code implementations 2 Dec 2015 Lawson L. S. Wong, Thanard Kurutach, Leslie Pack Kaelbling, Tomás Lozano-Pérez

We refer to this attribute-based representation as a world model, and consider how to acquire it via noisy perception and maintain it over time, as objects are added, changed, and removed in the world.

Attribute Clustering

Bayesian Optimization with Exponential Convergence

no code implementations NeurIPS 2015 Kenji Kawaguchi, Leslie Pack Kaelbling, Tomás Lozano-Pérez

This paper presents a Bayesian optimization method with exponential convergence without the need of auxiliary optimization and without the delta-cover sampling.

Bayesian Optimization
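The abstract's central idea, picking the next query point directly from an optimistic upper bound so no auxiliary acquisition optimization is needed, can be illustrated with a deliberately simplified sketch. This is not the paper's algorithm (it omits the Gaussian-process surrogate entirely); the function names and the Lipschitz-style bound are assumptions made for illustration.

```python
def optimistic_maximize(f, lipschitz_bound, n_evals=60):
    """Maximize f on [0, 1] by always expanding the most optimistic interval.

    Illustrative sketch only: the next sample is the midpoint of the cell
    whose bound f(mid) + L * width / 2 is largest, so no inner optimization
    of an acquisition function is required.
    """
    cells = [(0.0, 1.0, f(0.5))]  # each cell: (lo, hi, value at midpoint)
    evals = 1
    best_x, best_y = 0.5, cells[0][2]

    def upper(cell):
        lo, hi, y = cell
        return y + lipschitz_bound * (hi - lo) / 2.0

    while evals < n_evals:
        cell = max(cells, key=upper)   # most optimistic cell
        cells.remove(cell)
        lo, hi, y = cell
        third = (hi - lo) / 3.0
        for i in range(3):             # split into three children
            clo = lo + i * third
            chi = clo + third
            cmid = (clo + chi) / 2.0
            if i == 1:
                cy = y                 # middle child's midpoint equals the parent's
            else:
                cy = f(cmid)
                evals += 1
            if cy > best_y:
                best_x, best_y = cmid, cy
            cells.append((clo, chi, cy))
    return best_x, best_y
```

With a valid bound, the cell containing the maximizer always scores at least the true optimum under `upper`, so it is eventually refined and the returned point converges to the maximizer.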

Backward-Forward Search for Manipulation Planning

no code implementations 12 Apr 2016 Caelan Reed Garrett, Tomas Lozano-Perez, Leslie Pack Kaelbling

In this paper we address planning problems in high-dimensional hybrid configuration spaces, with a particular focus on manipulation planning problems involving many objects.

Focused Model-Learning and Planning for Non-Gaussian Continuous State-Action Systems

no code implementations 26 Jul 2016 Zi Wang, Stefanie Jegelka, Leslie Pack Kaelbling, Tomás Lozano-Pérez

We introduce a framework for model learning and planning in stochastic domains with continuous state and action spaces and non-Gaussian transition models.

STRIPS Planning in Infinite Domains

4 code implementations 1 Jan 2017 Caelan Reed Garrett, Tomás Lozano-Pérez, Leslie Pack Kaelbling

We introduce STRIPStream: an extension of the STRIPS language which can model these domains by supporting the specification of blackbox generators to handle complex constraints.

Motion Planning · Task and Motion Planning
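The blackbox generators mentioned in the abstract can be pictured, loosely, as lazy streams of candidate values that a planner queries until a constraint is satisfied. The sketch below is a hypothetical illustration only; `pose_stream`, `first_satisfying`, and the region tuple are invented names, not part of STRIPStream's actual interface.

```python
import itertools
import random

def pose_stream(region):
    """A blackbox generator ("stream") that lazily yields candidate placements."""
    lo_x, hi_x, lo_y, hi_y = region
    while True:
        yield (random.uniform(lo_x, hi_x), random.uniform(lo_y, hi_y))

def first_satisfying(stream, constraint, max_tries=1000):
    """Draw from a stream until a candidate meets a blackbox constraint."""
    for candidate in itertools.islice(stream, max_tries):
        if constraint(candidate):
            return candidate
    return None  # stream exhausted within the budget
```

The planner never inspects the constraint's internals; it only needs the generator's samples, which is what lets complex geometric conditions live outside the symbolic planning language.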

Generalization in Deep Learning

no code implementations 16 Oct 2017 Kenji Kawaguchi, Leslie Pack Kaelbling, Yoshua Bengio

This paper provides theoretical insights into why and how deep learning can generalize well, despite its large capacity, complexity, possible algorithmic instability, nonrobustness, and sharp minima, responding to an open question in the literature.

Open-Ended Question Answering

Guiding the search in continuous state-action spaces by learning an action sampling distribution from off-target samples

no code implementations 4 Nov 2017 Beomjoon Kim, Leslie Pack Kaelbling, Tomas Lozano-Perez

For such complex planning problems, unguided uniform sampling of actions until a path to a goal is found is hopelessly inefficient, and gradient-based approaches often fall short when the optimization manifold of a given problem is not smooth.

Generative Adversarial Network

Selecting Representative Examples for Program Synthesis

1 code implementation ICML 2018 Yewen Pu, Zachery Miranda, Armando Solar-Lezama, Leslie Pack Kaelbling

Program synthesis is a class of regression problems where one seeks a solution, in the form of a source-code program, mapping the inputs to their corresponding outputs exactly.

Program Synthesis
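The framing of synthesis as exact regression, finding a program that maps every input to its output, can be made concrete with a toy enumerative synthesizer. The three-operation DSL below is invented for illustration; real synthesizers use far richer languages and constraint solvers.

```python
from itertools import product

# A toy DSL of unary integer programs: sequences of primitive ops.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: x * 2,
    "neg": lambda x: -x,
}

def run(program, x):
    """Apply each op of the program to the input in order."""
    for op in program:
        x = PRIMITIVES[op](x)
    return x

def synthesize(examples, max_depth=4):
    """Return the shortest op sequence consistent with all (input, output) pairs."""
    for depth in range(1, max_depth + 1):
        for program in product(PRIMITIVES, repeat=depth):
            if all(run(program, i) == o for i, o in examples):
                return list(program)
    return None  # no program in the DSL fits all examples exactly
```

For instance, `synthesize([(1, 4), (3, 8)])` returns `["inc", "dbl"]`, the shortest composition mapping every input to its output exactly; the exactness requirement is what distinguishes this from ordinary regression.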

Learning to select examples for program synthesis

no code implementations ICLR 2018 Yewen Pu, Zachery Miranda, Armando Solar-Lezama, Leslie Pack Kaelbling

In this paper we address this challenge by constructing a representative subset of examples that is both small and is able to constrain the solver sufficiently.

Program Synthesis

Generalization in Machine Learning via Analytical Learning Theory

2 code implementations 21 Feb 2018 Kenji Kawaguchi, Yoshua Bengio, Vikas Verma, Leslie Pack Kaelbling

This paper introduces a novel measure-theoretic theory for machine learning that does not require statistical assumptions.

BIG-bench Machine Learning · Learning Theory +2

PDDLStream: Integrating Symbolic Planners and Blackbox Samplers via Optimistic Adaptive Planning

4 code implementations 23 Feb 2018 Caelan Reed Garrett, Tomás Lozano-Pérez, Leslie Pack Kaelbling

We extend PDDL to support a generic, declarative specification for these procedures that treats their implementation as black boxes.

Motion Planning

Integrating Human-Provided Information Into Belief State Representation Using Dynamic Factorization

no code implementations 28 Feb 2018 Rohan Chitnis, Leslie Pack Kaelbling, Tomás Lozano-Pérez

In partially observed environments, it can be useful for a human to provide the robot with declarative information that represents probabilistic relational constraints on properties of objects in the world, augmenting the robot's sensory observations.

Active model learning and diverse action sampling for task and motion planning

2 code implementations 2 Mar 2018 Zi Wang, Caelan Reed Garrett, Leslie Pack Kaelbling, Tomás Lozano-Pérez

Solving long-horizon problems in complex domains requires flexible generative planning that can combine primitive abilities in novel combinations to solve problems as they arise in the world.

Active Learning · Motion Planning +1

Learning What Information to Give in Partially Observed Domains

no code implementations 21 May 2018 Rohan Chitnis, Leslie Pack Kaelbling, Tomás Lozano-Pérez

We consider such a setting in which the agent can, while acting, transmit declarative information to the human that helps them understand aspects of this unseen environment.

Learning Quickly to Plan Quickly Using Modular Meta-Learning

1 code implementation 20 Sep 2018 Rohan Chitnis, Leslie Pack Kaelbling, Tomás Lozano-Pérez

Multi-object manipulation problems in continuous state and action spaces can be solved by planners that search over sampled values for the continuous parameters of operators.

Meta-Learning

Learning sparse relational transition models

no code implementations ICLR 2019 Victoria Xia, Zi Wang, Leslie Pack Kaelbling

For any action, a rule selects a set of relevant objects and computes a distribution over properties of just those objects in the resulting state given their properties in the previous state.
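The rule structure described above, select the relevant objects for an action and predict a distribution over only their properties, can be sketched with a toy "push" rule. Everything here (the state encoding, the 0.8 topple probability, the rule itself) is a hypothetical illustration, not the paper's learned representation.

```python
import random

def push_rule(state, target):
    """A toy relational transition rule for a 'push' action.

    The rule selects only the objects it deems relevant (the pushed object
    and anything directly on top of it) and returns a sampler over their
    next-state properties; all other objects are predicted to stay unchanged.
    State is a dict: object name -> property dict (hypothetical encoding).
    """
    relevant = [target] + [o for o, p in state.items() if p.get("on") == target]

    def sample():
        nxt = {o: dict(p) for o, p in state.items()}  # default: unchanged
        nxt[target]["x"] = state[target]["x"] + 1     # the target slides forward
        for o in relevant[1:]:
            # A stacked object either follows the target or topples off.
            if random.random() < 0.8:
                nxt[o]["x"] = nxt[target]["x"]
            else:
                nxt[o]["on"] = "table"
        return nxt

    return relevant, sample
```

Objects the rule never selects keep their previous properties by construction, which is exactly what makes such transition models sparse.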

Effect of Depth and Width on Local Minima in Deep Learning

no code implementations 20 Nov 2018 Kenji Kawaguchi, Jiaoyang Huang, Leslie Pack Kaelbling

In this paper, we analyze the effects of depth and width on the quality of local minima, without strong over-parameterization and simplification assumptions in the literature.

Elimination of All Bad Local Minima in Deep Learning

no code implementations 2 Jan 2019 Kenji Kawaguchi, Leslie Pack Kaelbling

At every local minimum of any deep neural network with these added neurons, the set of parameters of the original neural network (without added neurons) is guaranteed to be a global minimum of the original neural network.

Binary Classification · General Classification +1

Every Local Minimum Value is the Global Minimum Value of Induced Model in Non-convex Machine Learning

no code implementations 7 Apr 2019 Kenji Kawaguchi, Jiaoyang Huang, Leslie Pack Kaelbling

Furthermore, as special cases of our general results, this article improves or complements several state-of-the-art theoretical results on deep neural networks, deep residual networks, and overparameterized deep neural networks with a unified proof technique and novel geometric insights.

BIG-bench Machine Learning · Representation Learning

Few-Shot Bayesian Imitation Learning with Logical Program Policies

no code implementations 12 Apr 2019 Tom Silver, Kelsey R. Allen, Alex K. Lew, Leslie Pack Kaelbling, Josh Tenenbaum

We propose an expressive class of policies, a strong but general prior, and a learning algorithm that, together, can learn interesting policies from very few examples.

Bayesian Inference · Imitation Learning +1

Graph Element Networks: adaptive, structured computation and memory

2 code implementations 18 Apr 2019 Ferran Alet, Adarsh K. Jeewajee, Maria Bauza, Alberto Rodriguez, Tomas Lozano-Perez, Leslie Pack Kaelbling

We explore the use of graph neural networks (GNNs) to model spatial processes in which there is no a priori graphical structure.

Differentiable Algorithm Networks for Composable Robot Learning

no code implementations 28 May 2019 Peter Karkus, Xiao Ma, David Hsu, Leslie Pack Kaelbling, Wee Sun Lee, Tomas Lozano-Perez

This paper introduces the Differentiable Algorithm Network (DAN), a composable architecture for robot learning systems.

Navigate

Online Replanning in Belief Space for Partially Observable Task and Motion Problems

1 code implementation 11 Nov 2019 Caelan Reed Garrett, Chris Paxton, Tomás Lozano-Pérez, Leslie Pack Kaelbling, Dieter Fox

To solve multi-step manipulation tasks in the real world, an autonomous robot must take actions to observe its environment and react to unexpected observations.

Continuous Control

GLIB: Efficient Exploration for Relational Model-Based Reinforcement Learning via Goal-Literal Babbling

1 code implementation 22 Jan 2020 Rohan Chitnis, Tom Silver, Joshua Tenenbaum, Leslie Pack Kaelbling, Tomas Lozano-Perez

We address the problem of efficient exploration for transition model learning in the relational model-based reinforcement learning setting without extrinsic goals or rewards.

Decision Making · Efficient Exploration +3

Meta-learning curiosity algorithms

1 code implementation ICLR 2020 Ferran Alet, Martin F. Schneider, Tomas Lozano-Perez, Leslie Pack Kaelbling

We hypothesize that curiosity is a mechanism found by evolution that encourages meaningful exploration early in an agent's life in order to expose it to experiences that enable it to obtain high rewards over the course of its lifetime.

Acrobot · Meta-Learning

Planning with Learned Object Importance in Large Problem Instances using Graph Neural Networks

1 code implementation 11 Sep 2020 Tom Silver, Rohan Chitnis, Aidan Curtis, Joshua Tenenbaum, Tomas Lozano-Perez, Leslie Pack Kaelbling

We conclude that learning to predict a sufficient set of objects for a planning problem is a simple, powerful, and general mechanism for planning in large instances.

Motion Planning · Task and Motion Planning

Learning Online Data Association

no code implementations 28 Sep 2020 Yilun Du, Joshua B. Tenenbaum, Tomas Lozano-Perez, Leslie Pack Kaelbling

When an agent interacts with a complex environment, it receives a stream of percepts in which it may detect entities, such as objects or people.

Representation Learning

Integrated Task and Motion Planning

no code implementations 2 Oct 2020 Caelan Reed Garrett, Rohan Chitnis, Rachel Holladay, Beomjoon Kim, Tom Silver, Leslie Pack Kaelbling, Tomás Lozano-Pérez

The problem of planning for a robot that operates in environments containing a large number of objects, taking actions to move itself through the world as well as to change the state of the objects, is known as task and motion planning (TAMP).

Motion Planning · Task and Motion Planning

Measuring few-shot extrapolation with program induction

no code implementations NeurIPS Workshop CAP 2020 Ferran Alet, Javier Lopez-Contreras, Joshua B. Tenenbaum, Tomas Lozano-Perez, Leslie Pack Kaelbling

Program induction lies at the opposite end of the spectrum: programs are capable of extrapolating from very few examples, but we still do not know how to efficiently search for complex programs.

Meta-Learning · Program induction

Temporal and Object Quantification Nets

no code implementations 1 Jan 2021 Jiayuan Mao, Zhezheng Luo, Chuang Gan, Joshua B. Tenenbaum, Jiajun Wu, Leslie Pack Kaelbling, Tomer Ullman

We aim to learn generalizable representations for complex activities by quantifying over both entities and time, as in “the kicker is behind all the other players,” or “the player controls the ball until it moves toward the goal.” Such a structural inductive bias of object relations, object quantification, and temporal orders will enable the learned representation to generalize to situations with varying numbers of agents, objects, and time courses.

Event Detection · Inductive Bias +1
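The two kinds of quantification in the abstract, over objects ("behind all the other players") and over time ("until it moves toward the goal"), can be written directly as ordinary quantifiers. The predicates and state encoding below are invented toy stand-ins for what the learned networks represent with neural modules.

```python
def holds_until(pred_a, pred_b, trajectory):
    """Temporal quantification: pred_a holds at every step until pred_b first holds."""
    for state in trajectory:
        if pred_b(state):
            return True
        if not pred_a(state):
            return False
    return False  # pred_b never became true

def behind_all_others(kicker, players, behind):
    """Object quantification: the kicker is behind every other player."""
    return all(behind(kicker, p) for p in players if p != kicker)
```

Because both functions quantify over whatever entities and time steps they are given, the same definitions apply unchanged to scenes with different numbers of players or different trajectory lengths, which is the generalization the structural bias is after.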

Learning Symbolic Operators for Task and Motion Planning

1 code implementation 28 Feb 2021 Tom Silver, Rohan Chitnis, Joshua Tenenbaum, Leslie Pack Kaelbling, Tomas Lozano-Perez

We then propose a bottom-up relational learning method for operator learning and show how the learned operators can be used for planning in a TAMP system.

Motion Planning · Operator learning +2

Temporal and Object Quantification Networks

no code implementations 10 Jun 2021 Jiayuan Mao, Zhezheng Luo, Chuang Gan, Joshua B. Tenenbaum, Jiajun Wu, Leslie Pack Kaelbling, Tomer D. Ullman

We present Temporal and Object Quantification Networks (TOQ-Nets), a new class of neuro-symbolic networks with a structural bias that enables them to learn to recognize complex relational-temporal events.

Object · Temporal Sequences

Long-Horizon Manipulation of Unknown Objects via Task and Motion Planning with Estimated Affordances

no code implementations 9 Aug 2021 Aidan Curtis, Xiaolin Fang, Leslie Pack Kaelbling, Tomás Lozano-Pérez, Caelan Reed Garrett

We present a strategy for designing and building very general robot manipulation systems involving the integration of a general-purpose task-and-motion planner with engineered and learned perception modules that estimate properties and affordances of unknown objects.

Grasp Generation · Motion Planning +2

Learning Rational Skills for Planning from Demonstrations and Instructions

no code implementations 29 Sep 2021 Zhezheng Luo, Jiayuan Mao, Jiajun Wu, Tomas Lozano-Perez, Joshua B. Tenenbaum, Leslie Pack Kaelbling

We present a framework for learning compositional, rational skill models (RatSkills) that support efficient planning and inverse planning for achieving novel goals and recognizing activities.

On the Expressiveness and Learning of Relational Neural Networks on Hypergraphs

no code implementations 29 Sep 2021 Zhezheng Luo, Jiayuan Mao, Joshua B. Tenenbaum, Leslie Pack Kaelbling

Our first contribution is a fine-grained analysis of the expressiveness of these neural networks, that is, the set of functions that they can realize and the set of problems that they can solve.

Efficient Training and Inference of Hypergraph Reasoning Networks

no code implementations 29 Sep 2021 Guangxuan Xiao, Leslie Pack Kaelbling, Jiajun Wu, Jiayuan Mao

To leverage the sparsity in hypergraph neural networks, SpaLoc represents the grounding of relationships such as parent and grandparent as sparse tensors and uses neural networks and finite-domain quantification operations to infer new facts based on the input.

Knowledge Graphs · Logical Reasoning +1

Reinforcement Learning for Classical Planning: Viewing Heuristics as Dense Reward Generators

no code implementations 30 Sep 2021 Clement Gehring, Masataro Asai, Rohan Chitnis, Tom Silver, Leslie Pack Kaelbling, Shirin Sohrabi, Michael Katz

In this paper, we propose to leverage domain-independent heuristic functions commonly used in the classical planning literature to improve the sample efficiency of RL.

reinforcement-learning · Reinforcement Learning (RL)

Representation, learning, and planning algorithms for geometric task and motion planning

no code implementations 9 Mar 2022 Beomjoon Kim, Luke Shimanuki, Leslie Pack Kaelbling, Tomás Lozano-Pérez

The first is an algorithm for learning a rank function that guides the discrete task-level search, and the second is an algorithm for learning a sampler that guides the continuous motion-level search.

Motion Planning · Representation Learning +1

Predicate Invention for Bilevel Planning

1 code implementation 17 Mar 2022 Tom Silver, Rohan Chitnis, Nishanth Kumar, Willie McClinton, Tomas Lozano-Perez, Leslie Pack Kaelbling, Joshua Tenenbaum

Our key idea is to learn predicates by optimizing a surrogate objective that is tractable but faithful to our real efficient-planning objective.

PG3: Policy-Guided Planning for Generalized Policy Generation

1 code implementation 21 Apr 2022 Ryan Yang, Tom Silver, Aidan Curtis, Tomas Lozano-Perez, Leslie Pack Kaelbling

In this work, we study generalized policy search-based methods with a focus on the score function used to guide the search over policies.

Learning Neuro-Symbolic Skills for Bilevel Planning

no code implementations 21 Jun 2022 Tom Silver, Ashay Athalye, Joshua B. Tenenbaum, Tomas Lozano-Perez, Leslie Pack Kaelbling

Decision-making is challenging in robotics environments with continuous object-centric states, continuous actions, long horizons, and sparse feedback.

Decision Making · Motion Planning +1

Learning Efficient Abstract Planning Models that Choose What to Predict

1 code implementation 16 Aug 2022 Nishanth Kumar, Willie McClinton, Rohan Chitnis, Tom Silver, Tomás Lozano-Pérez, Leslie Pack Kaelbling

An effective approach to solving long-horizon tasks in robotics domains with continuous state and action spaces is bilevel planning, wherein a high-level search over an abstraction of an environment is used to guide low-level decision-making.

Decision Making · Operator learning

SE(3)-Equivariant Relational Rearrangement with Neural Descriptor Fields

1 code implementation 17 Nov 2022 Anthony Simeonov, Yilun Du, Lin Yen-Chen, Alberto Rodriguez, Leslie Pack Kaelbling, Tomas Lozano-Perez, Pulkit Agrawal

This formalism is implemented in three steps: assigning a consistent local coordinate frame to the task-relevant object parts, determining the location and orientation of this coordinate frame on unseen object instances, and executing an action that brings these frames into the desired alignment.

Object

On the Expressiveness and Generalization of Hypergraph Neural Networks

no code implementations 9 Mar 2023 Zhezheng Luo, Jiayuan Mao, Joshua B. Tenenbaum, Leslie Pack Kaelbling

Next, we analyze the learning properties of these neural networks, especially focusing on how they can be trained on a finite set of small graphs and generalize to larger graphs, which we term structural generalization.

PDSketch: Integrated Planning Domain Programming and Learning

no code implementations 9 Mar 2023 Jiayuan Mao, Tomás Lozano-Pérez, Joshua B. Tenenbaum, Leslie Pack Kaelbling

This paper studies a model learning and online planning approach towards building flexible and general robots.

Sparse and Local Networks for Hypergraph Reasoning

no code implementations 9 Mar 2023 Guangxuan Xiao, Leslie Pack Kaelbling, Jiajun Wu, Jiayuan Mao

Reasoning about the relationships between entities from input facts (e.g., whether Ari is a grandparent of Charlie) generally requires explicit consideration of other entities that are not mentioned in the query (e.g., the parents of Charlie).

Knowledge Graphs · World Knowledge

Learning Rational Subgoals from Demonstrations and Instructions

no code implementations 9 Mar 2023 Zhezheng Luo, Jiayuan Mao, Jiajun Wu, Tomás Lozano-Pérez, Joshua B. Tenenbaum, Leslie Pack Kaelbling

We present a framework for learning useful subgoals that support efficient long-term planning to achieve novel goals.

Generalized Planning in PDDL Domains with Pretrained Large Language Models

1 code implementation 18 May 2023 Tom Silver, Soham Dan, Kavitha Srinivas, Joshua B. Tenenbaum, Leslie Pack Kaelbling, Michael Katz

We investigate whether LLMs can serve as generalized planners: given a domain and training tasks, generate a program that efficiently produces plans for other tasks in the domain.

Distilled Feature Fields Enable Few-Shot Language-Guided Manipulation

1 code implementation 27 Jul 2023 William Shen, Ge Yang, Alan Yu, Jansen Wong, Leslie Pack Kaelbling, Phillip Isola

Self-supervised and language-supervised image models contain rich knowledge of the world that is important for generalization.

Few-Shot Learning · Language Modelling

Compositional Diffusion-Based Continuous Constraint Solvers

no code implementations 2 Sep 2023 Zhutian Yang, Jiayuan Mao, Yilun Du, Jiajun Wu, Joshua B. Tenenbaum, Tomás Lozano-Pérez, Leslie Pack Kaelbling

This paper introduces an approach for learning to solve continuous constraint satisfaction problems (CCSP) in robotic reasoning and planning.

Neural Relational Inference with Fast Modular Meta-learning

1 code implementation NeurIPS 2019 Ferran Alet, Erica Weng, Tomás Lozano-Pérez, Leslie Pack Kaelbling

Framing inference as the inner-loop optimization of meta-learning leads to a model-based approach that is more data-efficient and capable of estimating the state of entities that we do not observe directly, but whose existence can be inferred from their effect on observed entities.

Meta-Learning

Learning Reusable Manipulation Strategies

no code implementations 6 Nov 2023 Jiayuan Mao, Joshua B. Tenenbaum, Tomás Lozano-Pérez, Leslie Pack Kaelbling

Humans demonstrate an impressive ability to acquire and generalize manipulation "tricks."

Object

What Planning Problems Can A Relational Neural Network Solve?

1 code implementation NeurIPS 2023 Jiayuan Mao, Tomás Lozano-Pérez, Joshua B. Tenenbaum, Leslie Pack Kaelbling

Goal-conditioned policies are generally understood to be "feed-forward" circuits, in the form of neural networks that map from the current state and the goal specification to the next action to take.

Practice Makes Perfect: Planning to Learn Skill Parameter Policies

no code implementations 22 Feb 2024 Nishanth Kumar, Tom Silver, Willie McClinton, Linfeng Zhao, Stephen Proulx, Tomás Lozano-Pérez, Leslie Pack Kaelbling, Jennifer Barry

We consider a setting where a robot is initially equipped with (1) a library of parameterized skills, (2) an AI planner for sequencing together the skills given a goal, and (3) a very general prior distribution for selecting skill parameters.

Active Learning · Decision Making

Partially Observable Task and Motion Planning with Uncertainty and Risk Awareness

no code implementations 15 Mar 2024 Aidan Curtis, George Matheos, Nishad Gothoskar, Vikash Mansinghka, Joshua Tenenbaum, Tomás Lozano-Pérez, Leslie Pack Kaelbling

We propose a strategy for TAMP with Uncertainty and Risk Awareness (TAMPURA) that is capable of efficiently solving long-horizon planning problems with initial-state and action outcome uncertainty, including problems that require information gathering and avoiding undesirable and irreversible outcomes.

Motion Planning · Task and Motion Planning
