no code implementations • 15 Jan 2023 • Leonardo Lamanna, Luciano Serafini, Mohamadreza Faridghasemnia, Alessandro Saffiotti, Alessandro Saetti, Alfonso Gerevini, Paolo Traverso
Autonomous agents embedded in a physical environment need the ability to recognize objects and their properties from sensory data.
no code implementations • CVPR 2022 • Tommaso Campari, Leonardo Lamanna, Paolo Traverso, Luciano Serafini, Lamberto Ballan
In this paper, we present a novel approach to incrementally learn an Abstract Model of an unknown environment, and show how an agent can reuse the learned model for tackling the Object Goal Navigation task.
no code implementations • 29 Dec 2021 • Luciano Serafini, Raul Barbosa, Jasmin Grosinger, Luca Iocchi, Christian Napoli, Salvatore Rinzivillo, Jacques Robin, Alessandro Saffiotti, Teresa Scantamburlo, Peter Schueller, Paolo Traverso, Javier Vazquez-Salceda
The burgeoning of AI has prompted recommendations that AI techniques should be "human-centered".
1 code implementation • 18 Dec 2021 • Leonardo Lamanna, Luciano Serafini, Alessandro Saetti, Alfonso Gerevini, Paolo Traverso
If a robotic agent wants to exploit symbolic planning techniques to achieve some goal, it must be able to properly ground an abstract planning domain in the environment in which it operates.
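The idea of grounding an abstract planning domain in perception can be illustrated with a minimal sketch: a symbolic predicate such as `on(cup, table)` is evaluated by a perception routine over sensory data rather than assumed as given. All names, the toy detector, and the bounding-box heuristic below are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: grounding a symbolic predicate in sensing.
# The detector and the geometric rule are illustrative stand-ins.

def detect_objects(image):
    # Stand-in for a real object detector; returns (label, box) pairs
    # with boxes as (x1, y1, x2, y2). Faked so the example is self-contained.
    return [("cup", (40, 30, 60, 50)), ("table", (0, 45, 100, 90))]

def on(a, b, detections):
    """Ground the symbolic predicate on(a, b): a's box rests atop b's."""
    boxes = dict(detections)
    if a not in boxes or b not in boxes:
        return False
    ax1, ay1, ax2, ay2 = boxes[a]
    bx1, by1, bx2, by2 = boxes[b]
    # a is "on" b if the boxes overlap horizontally and a's bottom
    # edge touches or lies just above b's top edge.
    return ax1 < bx2 and bx1 < ax2 and abs(ay2 - by1) <= 10

dets = detect_objects(None)
print(on("cup", "table", dets))  # True for the faked detections
```

With such groundings in place, a symbolic planner can query predicate truth values from the environment instead of relying on a hand-maintained state description.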
no code implementations • 2 Oct 2020 • Sunandita Patra, James Mason, Malik Ghallab, Dana Nau, Paolo Traverso
However, executing the planned actions requires operational models, in which rich computational control structures and closed-loop online decision making specify how to perform an action in a nondeterministic execution context, react to events, and adapt to an unfolding situation.
no code implementations • 9 Mar 2020 • Sunandita Patra, James Mason, Amit Kumar, Malik Ghallab, Paolo Traverso, Dana Nau
We present new planning and learning algorithms for RAE, the Refinement Acting Engine.
no code implementations • 14 Mar 2019 • Luciano Serafini, Paolo Traverso
We propose a framework for learning discrete deterministic planning domains.
no code implementations • 16 Oct 2018 • Luciano Serafini, Paolo Traverso
Most work on planning and learning, e.g., planning by (model-based) reinforcement learning, rests on two main assumptions: (i) the set of states of the planning domain is fixed; (ii) the mapping between observations from the real world and states is implicitly assumed or learned offline, and is not part of the planning domain.
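The two assumptions can be made concrete with a minimal sketch of a discrete deterministic planning domain: the state set is fixed in advance, and the observation-to-state mapping is a hard-coded lookup outside the domain itself. All names and structures below are illustrative assumptions, not the framework proposed in the paper.

```python
# Hypothetical sketch of a discrete deterministic planning domain,
# illustrating assumption (i): the state set is fixed in advance.
STATES = {"at_door", "in_room", "at_table"}

# Deterministic transition function: (state, action) -> unique successor.
TRANSITIONS = {
    ("at_door", "enter"): "in_room",
    ("in_room", "approach_table"): "at_table",
    ("at_table", "leave"): "in_room",
}

def step(state, action):
    """Apply an action; determinism means one successor per pair."""
    return TRANSITIONS.get((state, action), state)

# Assumption (ii): the observation-to-state mapping is fixed offline,
# here a trivial lookup, and is not itself part of the planning domain.
OBS_TO_STATE = {"door_pixels": "at_door", "table_pixels": "at_table"}

def perceive(observation):
    return OBS_TO_STATE.get(observation, "in_room")

def plan(start, goal):
    """Breadth-first search over the fixed state set."""
    frontier = [(start, [])]
    visited = {start}
    while frontier:
        state, actions = frontier.pop(0)
        if state == goal:
            return actions
        for (s, a), nxt in TRANSITIONS.items():
            if s == state and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [a]))
    return None
```

Relaxing either assumption, e.g., letting the agent discover new states or learn `perceive` online as part of the domain, is exactly what this line of work targets.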