no code implementations • 21 Sep 2021 • Noor Sajid, Lancelot Da Costa, Thomas Parr, Karl Friston
Conversely, active inference reduces to Bayesian decision theory in the absence of ambiguity and relative risk, i.e., expected utility maximization.
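For context, the expected free energy that scores policies in active inference is commonly decomposed into risk and ambiguity terms. The display below is a standard sketch of that decomposition in generic notation (the symbols are assumed here, not quoted from the paper); when the ambiguity and the relative-risk (entropy) contributions vanish, minimising it amounts to maximising expected utility under log-preferences C(o).

    G(\pi) \;=\; \underbrace{D_{\mathrm{KL}}\big[\,q(o \mid \pi)\,\|\,p(o)\,\big]}_{\text{risk}}
    \;+\; \underbrace{\mathbb{E}_{q(s \mid \pi)}\big[\mathrm{H}[\,p(o \mid s)\,]\big]}_{\text{ambiguity}},
    \qquad p(o) \propto \exp\big(C(o)\big),

    \text{so that, with zero ambiguity and a policy-independent entropy } \mathrm{H}[q(o \mid \pi)],\quad
    \min_{\pi} G(\pi) \;\Longleftrightarrow\; \max_{\pi}\; \mathbb{E}_{q(o \mid \pi)}\big[C(o)\big].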
no code implementations • 12 Jul 2021 • Noor Sajid, Francesco Faccio, Lancelot Da Costa, Thomas Parr, Jürgen Schmidhuber, Karl Friston
Under the Bayesian brain hypothesis, behavioural variations can be attributed to different priors over generative model parameters.
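As a minimal illustration of that idea (a generic Bayesian sketch, not the paper's particular model), two agents observing the same data o can behave differently simply because their priors over the model parameters \theta differ:

    p(\theta \mid o, \lambda) \;\propto\; p(o \mid \theta)\; p(\theta \mid \lambda),

where \lambda stands for subject-specific hyperparameters of the prior; behavioural variation is then inherited from whichever actions are optimal under each resulting posterior.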
no code implementations • 17 Sep 2020 • Lancelot Da Costa, Noor Sajid, Thomas Parr, Karl Friston, Ryan Smith
In this paper, we consider the relation between active inference and dynamic programming under the Bellman equation, which underlies many approaches to reinforcement learning and control.
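For reference, the Bellman optimality equation that underwrites dynamic programming can be written, in standard reinforcement-learning notation (not taken from the paper itself), as

    V^{*}(s) \;=\; \max_{a}\;\Big[\, r(s,a) \;+\; \gamma \sum_{s'} p(s' \mid s, a)\, V^{*}(s') \,\Big],

where \gamma \in [0,1) is a discount factor and V^{*} the optimal value function.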
1 code implementation • 3 Sep 2020 • Danijar Hafner, Pedro A. Ortega, Jimmy Ba, Thomas Parr, Karl Friston, Nicolas Heess
While the narrow objectives correspond to domain-specific rewards as typical in reinforcement learning, the general objectives maximize mutual information with the environment through latent variable models of input sequences.
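One schematic way to write such an information-seeking objective (a hedged sketch in standard notation, not necessarily the exact objective used in the paper) is the mutual information between the input sequence x and the latent variables z of the sequence model:

    I(x; z) \;=\; \mathbb{E}_{p(x, z)}\!\left[\,\ln \frac{p(x, z)}{p(x)\,p(z)}\,\right],

which rewards representations, and the actions that shape the inputs, for keeping the agent informed about its environment.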
no code implementations • 7 Jun 2020 • Karl Friston, Lancelot Da Costa, Danijar Hafner, Casper Hesp, Thomas Parr
In this paper, we consider a sophisticated kind of active inference, using a recursive form of expected free energy.
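Schematically (a hedged sketch in generic notation, rather than the paper's exact formulation), a recursive expected free energy scores an action by its immediate risk and ambiguity plus the expected free energy of the actions that would follow, in the spirit of a Bellman backup:

    G(a_t \mid o_{1:t}) \;\approx\; \text{risk}_{t+1} + \text{ambiguity}_{t+1}
    \;+\; \mathbb{E}_{q(o_{t+1} \mid a_t)}\,\mathbb{E}_{q(a_{t+1} \mid o_{1:t+1})}\big[\, G(a_{t+1} \mid o_{1:t+1}) \,\big],

so that deep policy trees are evaluated by recursing over future observations and actions rather than by enumerating fixed policies.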
no code implementations • 9 Apr 2020 • Karl J. Friston, Thomas Parr, Peter Zeidman, Adeel Razi, Guillaume Flandin, Jean Daunizeau, Oliver J. Hulme, Alexander J. Billig, Vladimir Litvak, Rosalyn J. Moran, Cathy J. Price, Christian Lambert
This technical report describes a dynamic causal model of the spread of coronavirus through a population.
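For orientation only, the code below sketches the simplest textbook compartmental model of epidemic spread (a generic SIR model integrated with an Euler step); the report's dynamic causal model is considerably richer and is not reproduced here.

    # Minimal SIR sketch for orientation only -- a generic textbook model,
    # not the dynamic causal model described in the report.
    import numpy as np

    def sir(beta=0.3, gamma=0.1, days=160, dt=1.0, i0=1e-4):
        s, i, r = 1.0 - i0, i0, 0.0              # population fractions
        trajectory = []
        for _ in range(int(days / dt)):
            new_infections = beta * s * i * dt   # contact-driven transmission
            new_recoveries = gamma * i * dt      # recovery at rate gamma
            s -= new_infections
            i += new_infections - new_recoveries
            r += new_recoveries
            trajectory.append((s, i, r))
        return np.array(trajectory)

    print("peak infected fraction:", sir()[:, 1].max())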
no code implementations • 22 Jan 2020 • Lancelot Da Costa, Thomas Parr, Biswa Sengupta, Karl Friston
We then show that these neuronal dynamics approximate natural gradient descent, a well-known optimisation algorithm from information geometry that follows the direction of steepest descent of the objective in information space.
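For reference, natural gradient descent preconditions the ordinary gradient with the inverse Fisher information, so that updates follow the steepest direction as measured by the information metric; in standard notation (not specific to the paper):

    \theta_{k+1} \;=\; \theta_k \;-\; \eta\, F(\theta_k)^{-1}\, \nabla_{\theta} \mathcal{L}(\theta_k),
    \qquad
    F(\theta) \;=\; \mathbb{E}_{p_{\theta}(x)}\!\big[\, \nabla_{\theta} \ln p_{\theta}(x)\, \nabla_{\theta} \ln p_{\theta}(x)^{\top} \big].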
1 code implementation • 24 Sep 2019 • Noor Sajid, Philip J. Ball, Thomas Parr, Karl J. Friston
In this paper, we provide: 1) an accessible overview of the discrete-state formulation of active inference, highlighting natural behaviors in active inference that are generally engineered in RL; 2) an explicit discrete-state comparison between active inference and RL on an OpenAI gym baseline.
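As a minimal sketch of the interaction loop such a comparison runs on (assuming the classic gym reset/step API; the environment name and the random placeholder agent are illustrative, not the paper's configuration):

    # Minimal OpenAI gym interaction loop with a placeholder random agent.
    # Environment and agent are illustrative, not the paper's setup.
    import gym

    env = gym.make("FrozenLake-v1")        # any discrete-state task works here
    obs = env.reset()                      # classic gym API: reset returns the observation
    total_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample() # stand-in for an active-inference or RL agent
        obs, reward, done, info = env.step(action)  # classic 4-tuple step API
        total_reward += reward
    print("episode return:", total_reward)

Swapping the random sampler for an agent that scores actions by expected free energy, or by a learned value function, turns this loop into the kind of head-to-head comparison described above.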