Search Results for author: Paolo Traverso

Found 8 papers, 1 paper with code

Planning for Learning Object Properties

no code implementations · 15 Jan 2023 · Leonardo Lamanna, Luciano Serafini, Mohamadreza Faridghasemnia, Alessandro Saffiotti, Alessandro Saetti, Alfonso Gerevini, Paolo Traverso

Autonomous agents embedded in a physical environment need the ability to recognize objects and their properties from sensory data.

Tasks: Object

Online Learning of Reusable Abstract Models for Object Goal Navigation

no code implementations · CVPR 2022 · Tommaso Campari, Leonardo Lamanna, Paolo Traverso, Luciano Serafini, Lamberto Ballan

In this paper, we present a novel approach to incrementally learn an Abstract Model of an unknown environment, and show how an agent can reuse the learned model for tackling the Object Goal Navigation task.

Tasks: Image Segmentation, Object (+1)

Online Grounding of Symbolic Planning Domains in Unknown Environments

1 code implementation · 18 Dec 2021 · Leonardo Lamanna, Luciano Serafini, Alessandro Saetti, Alfonso Gerevini, Paolo Traverso

If a robotic agent wants to exploit symbolic planning techniques to achieve some goal, it must be able to properly ground an abstract planning domain in the environment in which it operates.

Tasks: Object
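
The grounding idea above can be illustrated with a minimal sketch: translating a robot's perceptual detections into ground symbolic facts that a planner can use. The predicate and object names (`graspable`, `cup1`, etc.) are illustrative assumptions, not the paper's actual domain or API.

```python
# Hypothetical sketch of online grounding: turning perceptual detections
# into ground symbolic facts. Predicates and object names are invented
# for illustration only.

def ground_observations(detections):
    """Convert perceptual detections into a set of ground symbolic facts."""
    facts = set()
    for obj_id, props in detections.items():
        facts.add(("object", obj_id))          # every detection is an object
        for prop, value in props.items():
            if value:                          # only assert properties that hold
                facts.add((prop, obj_id))
    return facts

# Example: a detector reports two objects with boolean properties.
detections = {
    "cup1": {"graspable": True, "on_table": True},
    "door1": {"graspable": False, "open": False},
}
facts = ground_observations(detections)
```

A real system would of course have to learn this perception-to-symbol mapping rather than receive clean boolean detections, which is the hard part the paper addresses.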

Deliberative Acting, Online Planning and Learning with Hierarchical Operational Models

no code implementations · 2 Oct 2020 · Sunandita Patra, James Mason, Malik Ghallab, Dana Nau, Paolo Traverso

However, executing the planned actions requires operational models, in which rich computational control structures and closed-loop online decision-making specify how to perform an action in a nondeterministic execution context, react to events, and adapt to an unfolding situation.

Tasks: Decision Making, Descriptive
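
The contrast between descriptive and operational models can be sketched as follows: rather than merely predicting a state transition, an operational model is a procedure that acts, senses the outcome, and retries on failure. All names here (`execute`, `sense`, the command strings) are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of an operational model: a procedure specifying HOW
# to perform "open door" with closed-loop sensing and retries, rather than
# a descriptive model that only predicts the resulting state.

def open_door(execute, sense, max_tries=3):
    """Operational model: act, observe the outcome, and retry on failure."""
    for _ in range(max_tries):
        execute("turn_handle")
        execute("push")
        if sense("door_open"):       # closed-loop check against the world
            return True
        execute("release_handle")    # recover before the next attempt
    return False                     # report failure to the acting engine

# A toy nondeterministic environment where the push succeeds on try 2.
state = {"door_open": False, "pushes": 0}

def execute(cmd):
    if cmd == "push":
        state["pushes"] += 1
        if state["pushes"] >= 2:
            state["door_open"] = True

def sense(fluent):
    return state[fluent]

result = open_door(execute, sense)
```

The retry loop is what distinguishes this from a descriptive model: the same abstract action may unfold differently each time, and the procedure adapts online.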

Incremental learning abstract discrete planning domains and mappings to continuous perceptions

no code implementations · 16 Oct 2018 · Luciano Serafini, Paolo Traverso

Most work on planning and learning, e.g., planning by (model-based) reinforcement learning, rests on two main assumptions: (i) the set of states of the planning domain is fixed; and (ii) the mapping between observations from the real world and states is implicitly assumed or learned offline, and is not part of the planning domain.

Tasks: Incremental Learning, Model-based Reinforcement Learning
