This review motivates and synthesizes research efforts in neuroscience-inspired artificial intelligence and biomimetic computing in terms of mortal computation.
This paper considers neural representation through the lens of active inference, a normative framework for understanding brain function.
We approach this problem via hierarchical generative modelling equipped with multi-level planning (for autonomous task completion) that mimics the deep temporal architecture of human motor control.
Artificial intelligence (AI) is rapidly becoming one of the key technologies of this century.
Even though the brain operates in pure darkness, within the skull, it can infer the most likely causes of its sensory input.
We present a didactic introduction to spectral Dynamic Causal Modelling (DCM), a Bayesian state-space modelling approach used to infer effective connectivity from non-invasive neuroimaging data.
This paper investigates the prospect of developing human-interpretable, explainable artificial intelligence (AI) systems based on active inference and the free energy principle.
This paper presents a model of consciousness that follows directly from the free-energy principle (FEP).
In part, this has been due to the poor performance of models trained with PC when evaluated by both sample quality and marginal likelihood.
This paper introduces a variational formulation of natural selection, paying special attention to the nature of "things" and the way that different "kinds" of "things" are individuated from, and influence, each other.
Living systems face both environmental complexity and limited access to free-energy resources.
Bistable perception follows from observing a static, ambiguous, (visual) stimulus with two possible interpretations.
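A common computational sketch of bistable perception is two mutually inhibiting populations, one per interpretation, with slow adaptation (neural fatigue) that eventually lets the suppressed interpretation take over. The following toy simulation is illustrative only; all parameters and variable names are assumptions, not taken from any specific paper.

```python
import numpy as np

# Two competing interpretations of an ambiguous stimulus.
# Mutual inhibition yields winner-take-all dominance; slow adaptation
# of the dominant population causes spontaneous perceptual switches.
rng = np.random.default_rng(0)
u = np.array([0.2, 0.0])   # activity of each interpretation
a = np.zeros(2)            # slow adaptation (fatigue) of each interpretation
dt, tau_a, k_inh, k_adapt = 0.1, 100.0, 2.0, 2.0
percepts = []
for _ in range(50000):
    # the ambiguous stimulus drives both interpretations equally (input = 1)
    drive = np.maximum(1.0 - k_inh * u[::-1] - k_adapt * a, 0.0)
    u += dt * (-u + drive) + rng.normal(0.0, 0.01, 2) * np.sqrt(dt)
    u = np.maximum(u, 0.0)
    a += dt / tau_a * (u - a)          # fatigue tracks activity slowly
    percepts.append(int(np.argmax(u))) # current percept = dominant population
switches = int(np.sum(np.diff(percepts) != 0))
print(switches)  # the percept alternates repeatedly over the run
```

The key mechanism: the winner's adaptation grows until its activity drops below the level needed to keep suppressing the rival, at which point dominance reverses.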
We show how any system with morphological degrees of freedom and locally limited free energy will, under the constraints of the free energy principle, evolve toward a neuromorphic morphology that supports hierarchical computations in which each level of the hierarchy enacts a coarse-graining of its inputs, and dually a fine-graining of its outputs.
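The coarse-graining/fine-graining duality described above can be sketched in a few lines: each level of a hierarchy pools (coarse-grains) its bottom-up input, and its top-down output is upsampled (fine-grained) back to the level below. This is a minimal illustration, not an implementation from the paper.

```python
import numpy as np

def coarse_grain(x, factor=2):
    # many fine states -> one coarse state (average pooling)
    return x.reshape(-1, factor).mean(axis=1)

def fine_grain(x, factor=2):
    # one coarse state -> many fine states (upsampling by repetition)
    return np.repeat(x, factor)

signal = np.arange(8, dtype=float)        # fine-scale input
level1 = coarse_grain(signal)             # 8 -> 4 states
level2 = coarse_grain(level1)             # 4 -> 2 states
top_down = fine_grain(fine_grain(level2)) # 2 -> 8 states
print(level2)    # [1.5 5.5]
print(top_down)  # [1.5 1.5 1.5 1.5 5.5 5.5 5.5 5.5]
```

Each ascending step discards fine detail; each descending step broadcasts a coarse state back over the finer scale it summarizes.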
Simulations evince a complex phase-space structure for these kinds of models, including stationary points, limit cycles, and the possibility of bifurcations and transitions among different modes of activity.
In this chapter, we identify fundamental geometric structures that underlie the problems of sampling, optimisation, inference and adaptive decision-making.
Active inference is an account of cognition and behavior in complex systems which brings together action, perception, and learning under the theoretical mantle of Bayesian inference.
Conversely, active inference reduces to Bayesian decision theory in the absence of ambiguity and relative risk, i.e., expected utility maximization.
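The reduction above can be read off the standard decomposition of expected free energy, written here in conventional notation (these symbols are the usual ones, not taken from this paper):

```latex
G(\pi) \;=\;
\underbrace{D_{\mathrm{KL}}\!\left[\, q(o \mid \pi) \,\|\, p(o) \,\right]}_{\text{risk}}
\;+\;
\underbrace{\mathbb{E}_{q(s \mid \pi)}\!\left[\, \mathrm{H}\!\left[ p(o \mid s) \right] \,\right]}_{\text{ambiguity}}
```

When the ambiguity term vanishes and the entropy part of risk is negligible, minimising $G(\pi)$ over policies reduces to maximising the expected log preference $\mathbb{E}_{q(o \mid \pi)}[\ln p(o)]$, i.e., expected utility.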
Under the Bayesian brain hypothesis, behavioural variations can be attributed to different priors over generative model parameters.
In this paper, we pursue the notion that this learnt behaviour can be a consequence of reward-free preference learning that ensures an appropriate trade-off between exploration and preference satisfaction.
Here, we introduce a novel method to plan in POMDPs, Active Inference Tree Search (AcT), that combines the normative character and biological realism of a leading planning theory in neuroscience (Active Inference) with the scalability of tree search methods in AI.
Precisely, we show the conditions under which active inference produces the optimal solution to the Bellman equation, a formulation that underlies several approaches to model-based reinforcement learning and control.
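For reference, the Bellman optimality equation in standard discounted-MDP notation (conventional symbols, not the paper's) is:

```latex
V^{*}(s) \;=\; \max_{a} \left[\, r(s,a) \;+\; \gamma \sum_{s'} P(s' \mid s, a)\, V^{*}(s') \,\right]
```

where $r(s,a)$ is the immediate reward, $\gamma \in [0,1)$ the discount factor, and $P(s' \mid s, a)$ the transition kernel.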
While the narrow objectives correspond to domain-specific rewards, as is typical in reinforcement learning, the general objectives maximize information about the environment through latent variable models of input sequences.
In a more complex Animal-AI environment, our agents (using the same neural architecture) are able to simulate future state transitions and actions (i.e., to plan), evincing reward-directed navigation despite temporary suspension of visual input.
We then show that these neuronal dynamics approximate natural gradient descent, a well-known optimisation algorithm from information geometry that follows the steepest descent of the objective in information space.
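Natural gradient descent, as invoked above, preconditions the ordinary gradient with the inverse Fisher information, $\theta \leftarrow \theta - \eta F(\theta)^{-1} \nabla \mathcal{L}(\theta)$. A minimal sketch for a 1-D Gaussian with unknown mean and log-standard-deviation (all names and parameters are illustrative assumptions):

```python
import numpy as np

# Natural gradient descent on the negative log-likelihood of N(mu, sigma^2),
# parameterised as (mu, log_sigma). For this model the Fisher information
# is diagonal: F = n * diag(1/sigma^2, 2).

def nll_grad(params, data):
    mu, log_sigma = params
    sigma2 = np.exp(2.0 * log_sigma)
    d_mu = -np.sum(data - mu) / sigma2
    d_log_sigma = len(data) - np.sum((data - mu) ** 2) / sigma2
    return np.array([d_mu, d_log_sigma])

def natural_gradient_step(params, data, lr=0.5):
    mu, log_sigma = params
    sigma2 = np.exp(2.0 * log_sigma)
    n = len(data)
    fisher = np.diag([n / sigma2, 2.0 * n])          # Fisher information
    grad = nll_grad(params, data)
    return params - lr * np.linalg.solve(fisher, grad)  # F^{-1} grad step

rng = np.random.default_rng(0)
data = rng.normal(3.0, 1.5, size=1000)
params = np.array([0.0, 0.0])
for _ in range(200):
    params = natural_gradient_step(params, data)
print(params)  # mu approaches 3.0; log_sigma approaches log(1.5)
```

Because the update follows steepest descent in information space rather than parameter space, the step for `mu` becomes invariant to the current scale `sigma`, which is the practical payoff of the method.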
Epileptic seizure activity shows complicated dynamics in both space and time.
In a published paper [Sengupta, 2016], we proposed that the brain (and other self-organized biological and artificial systems) can be characterized via the mathematical apparatus of a gauge theory.