Search Results for author: Matthew Marquez

Found 6 papers, 2 papers with code

GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems

no code implementations • 19 Oct 2023 • Kaya Stechly, Matthew Marquez, Subbarao Kambhampati

The study indicates that (i) LLMs are poor at solving graph coloring instances, (ii) they are no better at verifying a solution, and thus iterative modes in which LLMs critique LLM-generated solutions are ineffective, and (iii) the correctness and content of the criticisms, whether produced by LLMs or external solvers, are largely irrelevant to the performance of iterative prompting.

Scheduling
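For context on the external-solver critiques mentioned in the abstract above: verifying a graph coloring is trivial for a classical checker, in contrast to the reported unreliability of LLM verification. A minimal illustrative sketch (function and variable names are hypothetical, not taken from the paper):

```python
def verify_coloring(edges, coloring):
    """Return the list of monochromatic edges (empty list means the coloring is valid)."""
    return [(u, v) for u, v in edges if coloring.get(u) == coloring.get(v)]

# A triangle requires three distinct colors.
edges = [(0, 1), (1, 2), (0, 2)]
print(verify_coloring(edges, {0: "r", 1: "g", 2: "b"}))  # [] -> valid
print(verify_coloring(edges, {0: "r", 1: "r", 2: "b"}))  # [(0, 1)] -> conflict
```

A checker like this can serve as the external verifier in an iterative-prompting loop, returning the conflicting edges as feedback.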

Can Large Language Models Really Improve by Self-critiquing Their Own Plans?

no code implementations • 12 Oct 2023 • Karthik Valmeekam, Matthew Marquez, Subbarao Kambhampati

We evaluate a planning system that employs LLMs for both plan generation and verification.

On the Planning Abilities of Large Language Models: A Critical Investigation

2 code implementations • 25 May 2023 • Karthik Valmeekam, Matthew Marquez, Sarath Sreedharan, Subbarao Kambhampati

We aim to evaluate (1) the effectiveness of LLMs in generating plans autonomously in commonsense planning tasks and (2) the potential of LLMs in LLM-Modulo settings where they act as a source of heuristic guidance for external planners and verifiers.
