Active inference body perception and action for humanoid robots

7 Jun 2019 · Guillermo Oliver, Pablo Lanillos, Gordon Cheng

Providing artificial agents with the same computational models as biological systems is one way to understand how intelligent behaviours may emerge. We present an active inference model of body perception and action working for the first time on a humanoid robot. The model relies on the free energy principle proposed for the brain, in which the goal of both perception and action is to minimise prediction error through gradient descent on the variational free energy bound. The body state (a latent variable) is inferred by minimising the difference between the observed (visual and proprioceptive) sensor values and those predicted by the inner model. Simultaneously, action changes the sensory sampling so that the incoming sensations better correspond to the predictions of that model. We formalised and implemented the algorithm on the iCub robot and tested it in 2D and 3D visual spaces, evaluating online adaptation to visual changes, sensory noise and discrepancies between the model and the real robot. We also compared our approach with classical inverse kinematics in a reaching task, analysing the suitability of such a neuroscience-inspired approach for real-world interaction. The algorithm gave the robot adaptive body perception and toddler-like upper-body reaching with head tracking of the object, and it was able to incorporate visual features online (in a closed-loop manner) without increasing the computational complexity. Moreover, our model predicted involuntary actions in the presence of sensorimotor conflicts, indicating a path towards a potential proof of active inference in humans.
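For intuition, the perception-action scheme described in the abstract can be written in the standard active-inference form. As a rough sketch (assuming Gaussian densities and a static, first-order generative model; the paper's exact formulation may differ), the variational free energy reduces to a sum of precision-weighted prediction errors, and both the body estimate $\mu$ and the action $a$ descend its gradient:

$$
F \approx \tfrac{1}{2}\sum_{i\in\{p,v\}} e_i^\top \Pi_i\, e_i, \quad e_i = s_i - g_i(\mu), \qquad
\dot{\mu} = -k_\mu \frac{\partial F}{\partial \mu}, \quad \dot{a} = -k_a \frac{\partial F}{\partial a}
$$

Here $s_p, s_v$ are the proprioceptive and visual observations, $g_i$ the generative (forward) models, and $\Pi_i$ the sensor precisions. Below is a minimal Python sketch of one perception-action cycle under these assumptions; the forward models, their Jacobians and all gains are hypothetical placeholders, not the paper's implementation:

```python
import numpy as np

def active_inference_step(mu, a, s_prop, s_vis,
                          g_prop, g_vis, J_prop, J_vis,
                          Pi_prop, Pi_vis,
                          k_mu=1.0, k_a=0.1, dt=0.01):
    """One gradient-descent step on the variational free energy F.

    mu  : latent body state estimate (e.g. joint angles)
    a   : action (e.g. joint velocity command)
    s_* : observed sensor values; g_* : forward models s = g(mu)
    J_* : Jacobians dg/dmu; Pi_* : sensor precision matrices
    """
    # Prediction errors for each sensory modality
    e_prop = s_prop - g_prop(mu)
    e_vis = s_vis - g_vis(mu)

    # Perception: dF/dmu = -(J^T Pi e), so descending F pulls mu
    # toward a state that explains both modalities at once.
    dF_dmu = -(J_prop(mu).T @ (Pi_prop @ e_prop)
               + J_vis(mu).T @ (Pi_vis @ e_vis))
    mu = mu - k_mu * dF_dmu * dt

    # Action: with velocity control, proprioception directly tracks
    # the command (ds_prop/da ~ I), so dF/da ~ Pi_prop e_prop and the
    # robot moves until sensations match the model's predictions.
    a = a - k_a * (Pi_prop @ e_prop) * dt

    return mu, a

# Toy usage with a 1-DoF identity model g(mu) = mu for both modalities
mu, a = np.zeros(1), np.zeros(1)
I = np.eye(1)
g = lambda m: m
J = lambda m: I
mu, a = active_inference_step(mu, a, np.array([0.2]), np.array([0.5]),
                              g, g, J, J, I, I)
```

Iterating this cycle yields the closed-loop behaviour described above: perception keeps the body estimate consistent with vision and proprioception, while action drives the sampled sensations toward the predictions; biasing those predictions toward a target turns the same loop into goal-directed reaching.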
