Search Results for author: Pietro Mazzaglia

Found 11 papers, 5 papers with code

FOCUS: Object-Centric World Models for Robotics Manipulation

no code implementations • 5 Jul 2023 • Stefano Ferraro, Pietro Mazzaglia, Tim Verbelen, Bart Dhoedt

Understanding the world in terms of objects and the possible interactions with them is an important cognitive ability, especially for robotic manipulation, where many tasks require robot-object interactions.

Object

Maximum Causal Entropy Inverse Constrained Reinforcement Learning

no code implementations • 4 May 2023 • Mattijs Baert, Pietro Mazzaglia, Sam Leroux, Pieter Simoens

To address this challenge, we propose a novel method that uses the principle of maximum causal entropy to learn both the constraints and an optimal policy that adheres to them, from demonstrations of agents that abide by those constraints.

reinforcement-learning

Object-Centric Scene Representations using Active Inference

no code implementations • 7 Feb 2023 • Toon Van de Maele, Tim Verbelen, Pietro Mazzaglia, Stefano Ferraro, Bart Dhoedt

Representing a scene and its constituent objects from raw sensory data is a core ability for enabling robots to interact with their environment.

Object • Scene Understanding

Choreographer: Learning and Adapting Skills in Imagination

1 code implementation • 23 Nov 2022 • Pietro Mazzaglia, Tim Verbelen, Bart Dhoedt, Alexandre Lacoste, Sai Rajeswar

Unsupervised skill learning aims to learn a rich repertoire of behaviors without external supervision, providing artificial agents with the ability to control and influence the environment.

Unsupervised Reinforcement Learning

Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels

1 code implementation • 24 Sep 2022 • Sai Rajeswar, Pietro Mazzaglia, Tim Verbelen, Alexandre Piché, Bart Dhoedt, Aaron Courville, Alexandre Lacoste

In this work, we study the URLB and propose a new method to solve it: unsupervised model-based RL for pre-training the agent, and a task-aware fine-tuning strategy combined with a newly proposed hybrid planner, Dyna-MPC, to adapt the agent to downstream tasks.

reinforcement-learning • Reinforcement Learning (RL) +1

Disentangling Shape and Pose for Object-Centric Deep Active Inference Models

no code implementations • 16 Sep 2022 • Stefano Ferraro, Toon Van de Maele, Pietro Mazzaglia, Tim Verbelen, Bart Dhoedt

Recently, deep learning methods have been proposed to learn a hidden state-space structure purely from data, relieving the experimenter of this tedious design task, but resulting in an entangled, non-interpretable state space.

Disentanglement

Home Run: Finding Your Way Home by Imagining Trajectories

no code implementations • 19 Aug 2022 • Daria de Tinguy, Pietro Mazzaglia, Tim Verbelen, Bart Dhoedt

When mice are allowed to leave their cage and navigate a complex labyrinth unconstrained, they exhibit foraging behaviour, searching the labyrinth for rewards and returning to their home cage now and then, e.g., to drink.

Navigate

The Free Energy Principle for Perception and Action: A Deep Learning Perspective

no code implementations • 13 Jul 2022 • Pietro Mazzaglia, Tim Verbelen, Ozan Çatal, Bart Dhoedt

The free energy principle, and its corollary active inference, constitute a bio-inspired theory that assumes biological agents act to remain in a restricted set of preferred states of the world, i.e., they minimize their free energy.

Variational Inference
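As a concrete illustration of the free-energy-minimization idea in this abstract, the variational free energy of a discrete generative model can be sketched in a few lines. The toy two-state model and its numbers below are illustrative assumptions, not taken from the paper:

```python
import math

def free_energy(q, prior, likelihood, obs):
    """Variational free energy F = E_q[ln q(s) - ln p(o, s)] for a
    discrete latent state s and a single observation `obs`.
    `q` and `prior` are probability vectors over states;
    `likelihood[s][o]` is p(o | s). (Illustrative toy model.)"""
    return sum(
        q[s] * (math.log(q[s]) - math.log(prior[s] * likelihood[s][obs]))
        for s in range(len(q))
        if q[s] > 0
    )

# Hypothetical 2-state model: F is minimized when q is the exact posterior.
prior = [0.5, 0.5]
likelihood = [[0.9, 0.1], [0.2, 0.8]]  # p(o | s)
obs = 0
evidence = sum(prior[s] * likelihood[s][obs] for s in range(2))  # p(o)
posterior = [prior[s] * likelihood[s][obs] / evidence for s in range(2)]
```

Because F = KL(q || p(s|o)) - ln p(o), evaluating it at the exact posterior gives exactly -ln p(o); any other belief q yields a strictly larger value, which is what "minimizing free energy" amounts to in this toy setting.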

Contrastive Active Inference

1 code implementation • NeurIPS 2021 • Pietro Mazzaglia, Tim Verbelen, Bart Dhoedt

In this work, we propose a contrastive objective for active inference that strongly reduces the computational burden in learning the agent's generative model and planning future actions.

reinforcement-learning • Reinforcement Learning (RL)
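A generic InfoNCE-style contrastive loss conveys the flavour of the objective this abstract describes: score a positive pair against negatives and avoid evaluating a full generative likelihood. This is a minimal sketch of a standard contrastive loss, not the paper's exact formulation:

```python
import math

def info_nce(scores, positive_idx):
    """InfoNCE-style contrastive loss: negative log-softmax probability
    of the positive pair's score against the negatives' scores.
    `scores` is a list of similarity scores; `positive_idx` marks the
    true (positive) pair. (Generic sketch, hypothetical names.)"""
    m = max(scores)  # subtract max for numerical stability
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))
    return log_z - scores[positive_idx]
```

For example, with all scores equal the loss is log(N), and it shrinks toward zero as the positive score dominates the negatives, so minimizing it pushes matching pairs together relative to mismatched ones.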

Curiosity-Driven Exploration via Latent Bayesian Surprise

2 code implementations • ICLR Workshop SSL-RL 2021 • Pietro Mazzaglia, Ozan Catal, Tim Verbelen, Bart Dhoedt

The human intrinsic desire to pursue knowledge, also known as curiosity, is considered essential in the process of skill acquisition.
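Bayesian surprise is commonly quantified as the KL divergence between the agent's posterior belief over a latent state (after a new observation) and its prior (before it), and that divergence can serve as an intrinsic exploration reward. A minimal sketch for diagonal-Gaussian beliefs, with hypothetical toy numbers rather than the paper's implementation:

```python
import math

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians,
    each given as lists of per-dimension means and variances."""
    kl = 0.0
    for mq, vq, mp, vp in zip(mu_q, var_q, mu_p, var_p):
        kl += 0.5 * (math.log(vp / vq) + (vq + (mq - mp) ** 2) / vp - 1.0)
    return kl

# Intrinsic-reward sketch: surprise = KL(posterior over the latent state
# after the new observation || prior predicted before it).
# (Toy 1-D numbers for illustration only.)
reward = gaussian_kl([0.3], [0.5], [0.0], [1.0])
```

When an observation leaves the belief unchanged the KL (and hence the reward) is zero, so the agent is driven toward transitions that actually update its latent beliefs.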
