no code implementations • 16 Jan 2014 • Michael Katz, Carmel Domshlak
Indeed, some of the power of the explicit abstraction heuristics comes from precomputing the heuristic function offline and then determining h(s) for each evaluated state s by a very fast lookup in a database.
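A minimal sketch of how such a precomputed lookup can work, using a toy projection abstraction — the domain, the abstraction function, and all names below are illustrative assumptions, not the paper's construction:

```python
from collections import deque

# Hedged sketch of an explicit abstraction heuristic: project states onto a
# subset of variables, run one backward breadth-first search in the abstract
# space offline, then answer h(s) online by a fast dictionary lookup.

def build_pattern_database(abstract_states, abstract_succ, abstract_goals):
    """Offline: exact goal distances in the abstract space via backward BFS."""
    preds = {s: [] for s in abstract_states}   # reverse transition relation
    for s in abstract_states:
        for t in abstract_succ(s):
            preds[t].append(s)
    dist = {g: 0 for g in abstract_goals}
    queue = deque(abstract_goals)
    while queue:
        t = queue.popleft()
        for s in preds[t]:
            if s not in dist:
                dist[s] = dist[t] + 1
                queue.append(s)
    return dist

# Toy domain: states are (x, y) positions; the abstraction keeps only x.
def abstraction(state):
    return state[0]

def abstract_succ(x):
    return [x2 for x2 in (x - 1, x + 1) if 0 <= x2 <= 3]

pdb = build_pattern_database(range(4), abstract_succ, [3])

def h(state):
    """Online: heuristic value by a fast lookup in the precomputed table."""
    return pdb.get(abstraction(state), float("inf"))

print(h((0, 5)))  # → 3: abstract distance from x=0 to the goal x=3
```

The offline BFS is paid once per abstraction; every subsequent evaluation is a single hash lookup, which is the speed advantage the excerpt refers to.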
2 code implementations • 1 Nov 2018 • Tengfei Ma, Patrick Ferber, Siyu Huo, Jie Chen, Michael Katz
Automated planning is one of the foundational areas of AI.
1 code implementation • 15 May 2019 • Patrick Ferber, Tengfei Ma, Siyu Huo, Jie Chen, Michael Katz
Benchmark data sets are an indispensable ingredient of the evaluation of graph-based machine learning methods.
Ranked #2 on Graph Classification on IPC-lifted
2 code implementations • 28 Apr 2020 • Cameron Allen, Michael Katz, Tim Klinger, George Konidaris, Matthew Riemer, Gerald Tesauro
Focused macros dramatically improve black-box planning efficiency across a wide range of planning domains, sometimes beating even state-of-the-art planners with access to a full domain model.
no code implementations • 30 Sep 2021 • Clement Gehring, Masataro Asai, Rohan Chitnis, Tom Silver, Leslie Pack Kaelbling, Shirin Sohrabi, Michael Katz
In this paper, we propose to leverage domain-independent heuristic functions commonly used in the classical planning literature to improve the sample efficiency of RL.
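One standard way to realize this idea — shown here only as an illustrative sketch, not necessarily the paper's mechanism — is potential-based reward shaping, with the potential set to the negated heuristic value:

```python
import random

# Hedged sketch (an assumed illustration, not the paper's method):
# potential-based reward shaping injects a planning-style heuristic h into
# tabular Q-learning.  With potential Phi(s) = -h(s), the shaped reward
# F = r + Phi(s') - Phi(s) preserves the optimal policy (undiscounted case).

N = 6                        # toy chain: states 0..N, goal at N, unit action cost

def h(s):                    # simple admissible heuristic: distance to goal
    return N - s

def phi(s):
    return -h(s)

def step(s, a):              # actions: -1 (left) or +1 (right)
    s2 = max(0, min(N, s + a))
    return s2, -1.0, s2 == N     # every action costs 1; episode ends at goal

random.seed(0)
q = {(s, a): 0.0 for s in range(N + 1) for a in (-1, 1)}
for _ in range(200):
    s = 0
    for _ in range(50):
        if random.random() < 0.2:
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda x: q[(s, x)])
        s2, r, done = step(s, a)
        shaped = r + phi(s2) - phi(s)        # heuristic-shaped reward signal
        target = shaped + (0.0 if done else max(q[(s2, -1)], q[(s2, 1)]))
        q[(s, a)] += 0.5 * (target - q[(s, a)])
        s = s2
        if done:
            break

policy = [max((-1, 1), key=lambda x: q[(s, x)]) for s in range(N)]
print(policy)  # the learned greedy policy heads right, toward the goal
```

The shaping term makes progress toward the goal immediately visible to the learner instead of only at episode end, which is the sample-efficiency effect the excerpt describes.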
1 code implementation • 28 Oct 2021 • Yoni Choukroun, Michael Katz
Subspace optimization methods have the attractive property of reducing large-scale optimization problems to a sequence of low-dimensional subspace optimization problems.
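The general scheme can be sketched on a toy quadratic — the diagonal matrix, the two-direction subspace, and all names here are illustrative assumptions, not the paper's method:

```python
# Hedged sketch of the generic subspace-optimization idea: minimize a large
# quadratic f(x) = 0.5 x'Ax - b'x by repeatedly solving an exact 2-D
# subproblem over span{current gradient, previous step}.

def axpy(a, x, y):            # a*x + y, elementwise
    return [a * xi + yi for xi, yi in zip(x, y)]

def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def matvec(diag, x):          # A is diagonal here to keep the sketch tiny
    return [d * xi for d, xi in zip(diag, x)]

def subspace_minimize(diag, b, iters=30):
    n = len(b)
    x = [0.0] * n
    prev = None
    for _ in range(iters):
        g = axpy(-1.0, b, matvec(diag, x))       # gradient A x - b
        if dot(g, g) < 1e-20:                    # converged
            break
        dirs = [g] + ([prev] if prev is not None else [])
        # Low-dimensional subproblem: solve (V'AV) alpha = -V'g exactly.
        M = [[dot(u, matvec(diag, v)) for v in dirs] for u in dirs]
        rhs = [-dot(u, g) for u in dirs]
        if len(dirs) == 1:
            alpha = [rhs[0] / M[0][0]]
        else:                                    # 2x2 Cramer's rule
            det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
            alpha = [(rhs[0] * M[1][1] - M[0][1] * rhs[1]) / det,
                     (M[0][0] * rhs[1] - rhs[0] * M[1][0]) / det]
        step = [sum(a * d[i] for a, d in zip(alpha, dirs)) for i in range(n)]
        x = axpy(1.0, step, x)
        prev = step
    return x

diag = [1.0, 2.0, 5.0, 10.0]
b = [1.0, 1.0, 1.0, 1.0]
x = subspace_minimize(diag, b)
print([round(v, 6) for v in x])  # minimizer of the quadratic: x_i = b_i / a_i
```

Each iteration only ever solves a 2-by-2 system, regardless of the ambient dimension n — that is the reduction to low-dimensional subproblems the excerpt highlights (on quadratics, this two-direction scheme coincides with conjugate gradients).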
1 code implementation • 1 Mar 2022 • JunKyu Lee, Michael Katz, Don Joven Agravante, Miao Liu, Geraud Nangue Tasse, Tim Klinger, Shirin Sohrabi
Our approach defines options in hierarchical reinforcement learning (HRL) from AI planning (AIP) operators by establishing a correspondence between the state transition model of an AI planning problem and the abstract state transition system of a Markov Decision Process (MDP).
no code implementations • 9 Mar 2022 • Michael Katz, Eli Kravchik
In stream-based active learning, the learning procedure typically has access to a stream of unlabeled data instances and must decide for each instance whether to label it and use it for training or to discard it.
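The stream-based loop can be sketched for a one-dimensional threshold concept — the learner, the oracle, and the boundary below are illustrative assumptions, not taken from the paper:

```python
import random

# Hedged sketch of the generic stream-based selective-sampling loop: for a
# 1-D threshold concept, the learner buys a label only when the incoming
# instance falls inside the current version-space interval [lo, hi]; every
# other instance is discarded unlabeled, since its label is already implied.

random.seed(1)
TRUE_BOUNDARY = 0.6          # hidden concept: positive iff x > 0.6
lo, hi = 0.0, 1.0            # all thresholds in [lo, hi] remain consistent
labels_bought = 0

for _ in range(5000):                     # the unlabeled stream
    x = random.random()
    if lo < x < hi:                       # uncertain region: query the oracle
        y = int(x > TRUE_BOUNDARY)
        labels_bought += 1
        if y == 0:
            lo = x                        # boundary must lie above x
        else:
            hi = x                        # boundary must lie below x
    # else: discard the instance without labeling it

print(labels_bought, round((lo + hi) / 2, 3))
```

Because the interval shrinks with every query, only a small fraction of the 5000 streamed instances are ever labeled, which is exactly the label-versus-discard trade-off the excerpt describes.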
1 code implementation • 18 May 2023 • Tom Silver, Soham Dan, Kavitha Srinivas, Joshua B. Tenenbaum, Leslie Pack Kaelbling, Michael Katz
We investigate whether LLMs can serve as generalized planners: given a domain and training tasks, an LLM generates a program that efficiently produces plans for other tasks in the domain.
no code implementations • 22 Nov 2023 • Turgay Caglar, Sirine Belhaj, Tathagata Chakraborti, Michael Katz, Sarath Sreedharan
This is the first work to look at the application of large language models (LLMs) for the purpose of model space edits in automated planning tasks.
no code implementations • 25 Jan 2024 • Jana Vatter, Ruben Mayer, Hans-Arno Jacobsen, Horst Samulowitz, Michael Katz
Thus, the ability to predict their performance on a given problem is of great importance.
no code implementations • 5 Mar 2024 • Michael Katz, JunKyu Lee, Shirin Sohrabi
We show that task transformations found in the existing literature can be employed for the efficient certification of various top-quality planning problems and propose a novel transformation to efficiently certify loopless top-quality planning.
1 code implementation • 1 Apr 2024 • Michael Katz, JunKyu Lee, Jungkoo Kang, Shirin Sohrabi
The ability to generate multiple plans is central to using planning in real-life applications.
no code implementations • 18 Apr 2024 • Michael Katz, Harsha Kokel, Kavitha Srinivas, Shirin Sohrabi
We analyse the cost of using LLMs for planning and highlight that recent trends are profoundly uneconomical.