We find that this structure, known as a Markov blanket (i.e., a boundary precluding direct coupling between internal and external states), together with the stringent restrictions on its solenoidal flows, both required by the FEP, makes it challenging to find systems that fulfil the required assumptions.
The backpropagation of error algorithm (backprop) has been instrumental in the recent success of deep learning.
Active Inference (AIF) is an emerging framework in the brain sciences which suggests that biological agents act to minimise a variational bound on model evidence.
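The variational bound in question can be written out explicitly. The following uses standard notation from the variational inference literature (the symbols are conventional choices, not drawn from this text), with $o$ denoting observations and $s$ hidden states:

```latex
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  = -\ln p(o) + D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big]
  \;\geq\; -\ln p(o).
```

Because the KL divergence is non-negative, the variational free energy $F$ upper-bounds surprise (negative log model evidence); minimising $F$ is the sense in which agents are said to minimise a variational bound on model evidence.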
There are several ways to categorise reinforcement learning (RL) algorithms: model-based versus model-free, policy-based versus planning-based, on-policy versus off-policy, and online versus offline.
Recently, it has been shown that backprop in multilayer perceptrons (MLPs) can be approximated using predictive coding, a biologically plausible process theory of cortical computation which relies only on local and Hebbian updates.
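The local, Hebbian character of these updates can be sketched in a few lines. The toy implementation below is an illustration only (the network sizes, learning rates, and toy task are assumptions, not taken from the text): with input and output clamped, the hidden activity relaxes to minimise a sum of squared prediction errors, after which each weight matrix is updated from purely local quantities, namely the error at its output layer and the activity at its input layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network, input (2) -> hidden (8) -> output (1).
# Sizes and rates are illustrative assumptions.
n_in, n_hid, n_out = 2, 8, 1
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))

f = np.tanh                               # activation function
df = lambda x: 1.0 - np.tanh(x) ** 2      # its derivative

def pc_step(x0, target, lr_x=0.2, lr_w=0.05, n_infer=30):
    """One predictive-coding update: relax the hidden activity by
    descending on the prediction-error energy, then apply local
    (Hebbian-like) weight updates from the resulting errors."""
    global W1, W2
    x1 = W1 @ f(x0)                       # start at feedforward prediction
    for _ in range(n_infer):              # inference phase (output clamped)
        e1 = x1 - W1 @ f(x0)              # hidden-layer prediction error
        e2 = target - W2 @ f(x1)          # output-layer prediction error
        x1 += lr_x * (-e1 + df(x1) * (W2.T @ e2))
    e1 = x1 - W1 @ f(x0)
    e2 = target - W2 @ f(x1)
    W2 += lr_w * np.outer(e2, f(x1))      # local: error x presynaptic activity
    W1 += lr_w * np.outer(e1, f(x0))
    return float(e2 @ e2)                 # squared output error

# Toy task (an assumption): learn y = a * b on four fixed points.
data = [(np.array([a, b]), np.array([a * b]))
        for a in (-1.0, 1.0) for b in (-1.0, 1.0)]
first = sum(pc_step(x, y) for x, y in data)
for _ in range(200):
    last = sum(pc_step(x, y) for x, y in data)
print(f"error before: {first:.3f}  after: {last:.3f}")
```

Note that no global backward pass appears anywhere: each weight update uses only the error vector immediately above it and the activity immediately below it, which is what makes the scheme a candidate process theory for cortical learning.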
The Bayesian brain hypothesis, predictive processing and variational free energy minimisation are typically used to describe perceptual processes based on accurate generative models of the world.
We link this to popular formulations of perception and action in the cognitive sciences, and show its limitations when, for instance, external forces are not modelled by an agent.