no code implementations • 20 Aug 2024 • Marianela Morales, Alberto Pozanco, Giuseppe Canonaco, Sriram Gopalakrishnan, Daniel Borrajo, Manuela Veloso
Most of the work on learning action models focuses on learning the actions' dynamics from input plans.
no code implementations • 14 Jun 2024 • Tomas De la Rosa, Sriram Gopalakrishnan, Alberto Pozanco, Zhen Zeng, Daniel Borrajo
Travel planning is a complex task that involves generating a sequence of actions related to visiting places subject to constraints and maximizing some user satisfaction criteria.
no code implementations • 11 Apr 2024 • Giuseppe Canonaco, Leo Ardon, Alberto Pozanco, Daniel Borrajo
The use of Potential Based Reward Shaping (PBRS) has shown great promise in the ongoing research effort to tackle sample inefficiency in Reinforcement Learning (RL).
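The core idea of PBRS can be sketched in a few lines. This is a minimal illustration, not code from the paper: the potential function `phi` and the corridor example are assumptions for demonstration. The shaped reward r' = r + γΦ(s') − Φ(s) is known to preserve optimal policies (Ng, Harada & Russell, 1999).

```python
def shaped_reward(r, s, s_next, phi, gamma=0.99):
    """Return the PBRS-shaped reward for a transition (s, r, s_next):
    r' = r + gamma * phi(s_next) - phi(s)."""
    return r + gamma * phi(s_next) - phi(s)

# Hypothetical example: a 1-D corridor with goal state 10, using the
# negative distance to the goal as the potential.
phi = lambda s: -abs(10 - s)
print(shaped_reward(0.0, s=3, s_next=4, phi=phi, gamma=1.0))  # moving closer yields +1.0
```

Because the shaping term telescopes along any trajectory, denser feedback is added without changing which policy is optimal.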
no code implementations • 15 Feb 2024 • Alberto Pozanco, Daniel Borrajo, Manuela Veloso
In many real-world planning applications, agents might be interested in finding plans whose actions have costs that are as uniform as possible.
no code implementations • 12 Feb 2024 • Alberto Pozanco, Ramon Fraga Pereira, Daniel Borrajo
Most research on planning environment (re)design assumes the interested party's objective is to facilitate the recognition of goals and plans, and searches over the space of environment modifications to find the minimal set of changes that simplifies those tasks and optimises a particular metric.
no code implementations • 11 Aug 2023 • Parisa Zehtabi, Alberto Pozanco, Ayala Bloch, Daniel Borrajo, Sarit Kraus
We propose CMAoE, a domain-independent approach to obtain contrastive explanations by: (i) generating a new solution $S^\prime$ where property $P$ is enforced, while also minimizing the differences between $S$ and $S^\prime$; and (ii) highlighting the differences between the two solutions, with respect to the features of the objective function of the multi-agent system.
no code implementations • 1 Dec 2022 • Alberto Pozanco, Daniel Borrajo
In cooperative Multi-Agent Planning (MAP), a set of goals has to be achieved by a set of agents.
no code implementations • 28 Nov 2022 • Leo Ardon, Alberto Pozanco, Daniel Borrajo, Sumitra Ganesh
Knowing this information can help reduce the sample complexity of RL algorithms by masking the inapplicable actions from the policy distribution to only explore actions relevant to finding an optimal policy.
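Masking inapplicable actions out of the policy distribution can be sketched as a masked softmax over the policy logits. This is an illustrative sketch, not the paper's implementation; the logits and applicability vector are made-up inputs.

```python
import math

def masked_softmax(logits, applicable):
    """Softmax over policy logits where inapplicable actions are set to
    -inf, so they receive exactly zero probability mass."""
    masked = [l if ok else -math.inf for l, ok in zip(logits, applicable)]
    m = max(masked)
    exps = [math.exp(l - m) for l in masked]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical step: action 1 is inapplicable in the current state.
probs = masked_softmax([1.0, 2.0, 3.0], [True, False, True])
print(probs)  # action 1 gets probability 0.0; the rest renormalize
```

The agent then samples only from applicable actions, which shrinks the effective exploration space.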
no code implementations • 30 Mar 2022 • Alberto Pozanco, Yolanda E-Martín, Susana Fernández, Daniel Borrajo
In competitive environments, agents commonly try to prevent opponents from achieving their goals.
no code implementations • 16 Mar 2022 • Alberto Pozanco, Francesca Mosca, Parisa Zehtabi, Daniele Magazzeni, Sarit Kraus
The EXPRES framework consists of: (i) an explanation generator that, based on a Mixed-Integer Linear Programming model, finds the best set of reasons that can explain an unsatisfied preference; and (ii) an explanation parser, which translates the generated explanations into human interpretable ones.
no code implementations • 27 Feb 2020 • Daniel Ciolek, Nicolás D'Ippolito, Alberto Pozanco, Sebastian Sardina
A planning domain, like any model, is never complete and inevitably makes assumptions about the environment's dynamics.
no code implementations • 27 May 2019 • Robert C. Holte, Ruben Majadas, Alberto Pozanco, Daniel Borrajo
There is broad consensus that this bound is not very tight: the actual suboptimality of wA*'s solution is often much less than W times optimal.
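The W-times-optimal bound comes from weighted A* expanding nodes in order of f = g + W·h. A minimal sketch of that search, assuming an admissible heuristic `h` and a `neighbors(s)` function returning (successor, cost) pairs (names are illustrative, not from the paper):

```python
import heapq
import math

def weighted_astar(start, goal, neighbors, h, W=2.0):
    """Weighted A*: expand by f = g + W*h. With admissible h, the
    returned cost is guaranteed to be at most W times optimal."""
    open_heap = [(W * h(start), 0.0, start)]
    best_g = {start: 0.0}
    while open_heap:
        f, g, s = heapq.heappop(open_heap)
        if s == goal:
            return g
        if g > best_g.get(s, math.inf):
            continue  # stale heap entry
        for s2, cost in neighbors(s):
            g2 = g + cost
            if g2 < best_g.get(s2, math.inf):
                best_g[s2] = g2
                heapq.heappush(open_heap, (g2 + W * h(s2), g2, s2))
    return None  # no path found
```

Setting W = 1 recovers plain A*; larger W trades solution quality for (typically) far fewer expansions, and measuring the gap between this bound and actual solution costs is the question the paper studies.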