Search Results for author: Alberto Pozanco

Found 12 papers, 0 papers with code

On Learning Action Costs from Input Plans

no code implementations · 20 Aug 2024 · Marianela Morales, Alberto Pozanco, Giuseppe Canonaco, Sriram Gopalakrishnan, Daniel Borrajo, Manuela Veloso

Most of the work on learning action models focuses on learning the actions' dynamics from input plans.


TRIP-PAL: Travel Planning with Guarantees by Combining Large Language Models and Automated Planners

no code implementations · 14 Jun 2024 · Tomas De la Rosa, Sriram Gopalakrishnan, Alberto Pozanco, Zhen Zeng, Daniel Borrajo

Travel planning is a complex task that involves generating a sequence of actions related to visiting places subject to constraints and maximizing some user satisfaction criteria.

Language Modelling · Large Language Model +1

On the Sample Efficiency of Abstractions and Potential-Based Reward Shaping in Reinforcement Learning

no code implementations · 11 Apr 2024 · Giuseppe Canonaco, Leo Ardon, Alberto Pozanco, Daniel Borrajo

The use of Potential Based Reward Shaping (PBRS) has shown great promise in the ongoing research effort to tackle sample inefficiency in Reinforcement Learning (RL).

Reinforcement Learning (RL)
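The abstract above refers to Potential-Based Reward Shaping (PBRS). As a general illustration of the technique (a minimal sketch, not the paper's specific method; the grid-world potential function is a hypothetical example):

```python
def shaped_reward(reward, phi_s, phi_s_next, gamma=0.99):
    """PBRS adds F(s, s') = gamma * phi(s') - phi(s) to the environment
    reward; this class of shaping functions preserves the optimal policy."""
    return reward + gamma * phi_s_next - phi_s

# Example potential: negative Manhattan distance to a goal cell in a grid
def phi(state, goal=(3, 3)):
    return -(abs(state[0] - goal[0]) + abs(state[1] - goal[1]))

# Moving one step closer to the goal yields a positive shaping bonus
bonus = shaped_reward(0.0, phi((0, 0)), phi((0, 1)), gamma=1.0)
# phi((0,0)) = -6, phi((0,1)) = -5, so bonus = -5 - (-6) = 1.0
```

The shaping term densifies the reward signal, which is why PBRS is commonly studied as a remedy for sample inefficiency.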

On Computing Plans with Uniform Action Costs

no code implementations · 15 Feb 2024 · Alberto Pozanco, Daniel Borrajo, Manuela Veloso

In many real-world planning applications, agents might be interested in finding plans whose actions have costs that are as uniform as possible.

Generalising Planning Environment Redesign

no code implementations · 12 Feb 2024 · Alberto Pozanco, Ramon Fraga Pereira, Daniel Borrajo

Most research on planning environment (re)design assumes the interested party's objective is to facilitate the recognition of goals and plans, and searches over the space of environment modifications for the minimal set of changes that simplifies those tasks and optimises a particular metric.

Contrastive Explanations of Centralized Multi-agent Optimization Solutions

no code implementations · 11 Aug 2023 · Parisa Zehtabi, Alberto Pozanco, Ayala Bloch, Daniel Borrajo, Sarit Kraus

We propose CMAoE, a domain-independent approach to obtain contrastive explanations by: (i) generating a new solution $S^\prime$ where property $P$ is enforced, while also minimizing the differences between $S$ and $S^\prime$; and (ii) highlighting the differences between the two solutions, with respect to the features of the objective function of the multi-agent system.

Fairness in Multi-Agent Planning

no code implementations · 1 Dec 2022 · Alberto Pozanco, Daniel Borrajo

In cooperative Multi-Agent Planning (MAP), a set of goals has to be achieved by a set of agents.

Fairness

Inapplicable Actions Learning for Knowledge Transfer in Reinforcement Learning

no code implementations · 28 Nov 2022 · Leo Ardon, Alberto Pozanco, Daniel Borrajo, Sumitra Ganesh

Knowing this information can help reduce the sample complexity of RL algorithms by masking the inapplicable actions from the policy distribution to only explore actions relevant to finding an optimal policy.

reinforcement-learning · Reinforcement Learning +2
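The abstract describes masking inapplicable actions out of the policy distribution. As a generic sketch of that idea (an illustrative softmax mask, not the paper's implementation):

```python
import math

def masked_policy(logits, applicable):
    """Softmax over action logits with inapplicable actions masked to -inf,
    so they receive exactly zero probability and are never sampled."""
    masked = [l if a else float("-inf") for l, a in zip(logits, applicable)]
    m = max(masked)                       # assumes at least one applicable action
    exps = [math.exp(l - m) for l in masked]  # exp(-inf) evaluates to 0.0
    z = sum(exps)
    return [e / z for e in exps]

probs = masked_policy([1.0, 2.0, 0.5], [True, False, True])
# the masked action gets probability exactly 0.0
```

Because the masked actions carry zero probability, exploration is confined to actions that can actually execute, which is the mechanism the abstract credits for reducing sample complexity.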

Anticipatory Counterplanning

no code implementations · 30 Mar 2022 · Alberto Pozanco, Yolanda E-Martín, Susana Fernández, Daniel Borrajo

In competitive environments, agents commonly try to prevent opponents from achieving their goals.

Explaining Preference-driven Schedules: the EXPRES Framework

no code implementations · 16 Mar 2022 · Alberto Pozanco, Francesca Mosca, Parisa Zehtabi, Daniele Magazzeni, Sarit Kraus

The EXPRES framework consists of: (i) an explanation generator that, based on a Mixed-Integer Linear Programming model, finds the best set of reasons that can explain an unsatisfied preference; and (ii) an explanation parser, which translates the generated explanations into human interpretable ones.

Scheduling

Multi-tier Automated Planning for Adaptive Behavior (Extended Version)

no code implementations · 27 Feb 2020 · Daniel Ciolek, Nicolás D'Ippolito, Alberto Pozanco, Sebastian Sardina

A planning domain, like any model, is never complete and inevitably makes assumptions about the environment's dynamics.

Fairness

Error Analysis and Correction for Weighted A*'s Suboptimality (Extended Version)

no code implementations · 27 May 2019 · Robert C. Holte, Ruben Majadas, Alberto Pozanco, Daniel Borrajo

There is broad consensus that this bound is not very accurate and that the actual suboptimality of wA*'s solutions is often much less than W times optimal.
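The W-times-optimal bound comes from weighted A*, which inflates the heuristic: f(n) = g(n) + W · h(n). As a self-contained illustration of the algorithm the bound applies to (a textbook sketch on a hypothetical empty grid, not the paper's correction method):

```python
import heapq

def weighted_a_star(start, goal, neighbors, h, w=2.0):
    """Weighted A*: f(n) = g(n) + w * h(n). With an admissible h, the
    returned plan costs at most w times the optimal cost (the classical
    bound whose looseness the paper analyses)."""
    open_heap = [(w * h(start), 0.0, start)]
    g = {start: 0.0}
    closed = set()
    while open_heap:
        f, gc, n = heapq.heappop(open_heap)
        if n == goal:
            return gc
        if n in closed:
            continue
        closed.add(n)
        for m, cost in neighbors(n):
            ng = gc + cost
            if ng < g.get(m, float("inf")):
                g[m] = ng
                heapq.heappush(open_heap, (ng + w * h(m), ng, m))
    return None

# 4-connected 5x5 grid, unit edge costs, Manhattan-distance heuristic
def nbrs(p):
    x, y = p
    return [((x + dx, y + dy), 1.0)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

cost = weighted_a_star((0, 0), (4, 4), nbrs,
                       lambda p: abs(p[0] - 4) + abs(p[1] - 4), w=2.0)
# on an obstacle-free grid the inflated search still returns the optimal cost, 8.0
```

Cases like this, where wA* returns a solution far below the W·optimal guarantee, are exactly the gap between the bound and observed suboptimality that motivates the error analysis.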
