
1 code implementation • 11 Jan 2022 • Conor Heins, Beren Millidge, Daphne Demekas, Brennan Klein, Karl Friston, Iain Couzin, Alexander Tschantz

Active inference is an account of cognition and behavior in complex systems which brings together action, perception, and learning under the theoretical mantle of Bayesian inference.

no code implementations • 3 Dec 2021 • Pablo Lanillos, Cristian Meo, Corrado Pezzato, Ajith Anil Meera, Mohamed Baioumy, Wataru Ohata, Alexander Tschantz, Beren Millidge, Martijn Wisse, Christopher L. Buckley, Jun Tani

Active inference is a mathematical framework which originated in computational neuroscience as a theory of how the brain implements action, perception and learning.

no code implementations • 2 Sep 2021 • Paul F. Kinghorn, Beren Millidge, Christopher L. Buckley

In cognitive science, behaviour is often separated into two types.

no code implementations • 30 Aug 2021 • Beren Millidge, Anil Seth, Christopher L Buckley

The Free Energy Principle (FEP) is an influential and controversial theory which postulates a deep and powerful connection between the stochastic thermodynamics of self-organization and learning through variational inference.

no code implementations • 27 Jul 2021 • Beren Millidge, Anil Seth, Christopher L Buckley

Predictive coding offers a potentially unifying account of cortical function -- postulating that the core function of the brain is to minimize prediction errors with respect to a generative model of the world.

no code implementations • 30 Jun 2021 • Beren Millidge

We focus on predictive coding, a neurobiologically plausible process theory derived from the free energy principle which argues that the primary function of the brain is to minimize prediction errors. We show how predictive coding can be scaled up and extended to be more biologically plausible, and elucidate its close links with other methods such as Kalman filtering.

1 code implementation • 4 Jun 2021 • Alejandro Daniel Noel, Charel van Hoof, Beren Millidge

Our model is capable of solving sparse-reward problems with a very high sample efficiency due to its objective function, which encourages directed exploration of uncertain states.

no code implementations • 3 Jun 2021 • Beren Millidge

We provide a precise characterisation of what an abstraction is and, perhaps more importantly, suggest how abstractions can be learnt directly from data both for static datasets and for dynamical systems.

no code implementations • 24 May 2021 • Miguel Aguilera, Beren Millidge, Alexander Tschantz, Christopher L. Buckley

We discover that two requirements of the FEP -- the Markov blanket condition (i.e. a statistical boundary precluding direct coupling between internal and external states) and stringent restrictions on its solenoidal flows (i.e. tendencies driving a system out of equilibrium) -- are only valid for a very narrow space of parameters.

1 code implementation • 11 Mar 2021 • Beren Millidge, Anil Seth, Christopher Buckley

We propose a dichotomy in the objective functions underlying adaptive behaviour between \emph{evidence} objectives, which correspond to well-known reward- or utility-maximizing objectives in the literature, and \emph{divergence} objectives, which instead seek to minimize the divergence between the agent's expected and desired futures. We argue that this new class of divergence objectives could form the mathematical foundation for a much richer understanding of the exploratory components of adaptive and intelligent action, beyond simply greedy utility maximization.
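One plausible way to formalize the two classes (in generic notation, assumed here rather than quoted from the paper, with Q(o) the agent's expected outcomes and \tilde{P}(o) its desired outcomes):

```latex
% Evidence objective: maximize expected log-probability of outcomes o
% under the desired (biased) distribution
\mathcal{J}_{\mathrm{evidence}} = \mathbb{E}_{Q(o)}\!\left[\ln \tilde{P}(o)\right]

% Divergence objective: minimize the KL divergence between expected
% and desired futures
\mathcal{J}_{\mathrm{divergence}} = D_{\mathrm{KL}}\!\left[Q(o)\,\|\,\tilde{P}(o)\right]
```

Since $D_{\mathrm{KL}}[Q\,\|\,\tilde{P}] = -\mathbb{H}[Q(o)] - \mathbb{E}_{Q(o)}[\ln \tilde{P}(o)]$, minimizing the divergence adds an entropy-maximizing term to the evidence objective, which is one way to see where an exploratory drive enters.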

1 code implementation • 19 Feb 2021 • Beren Millidge, Alexander Tschantz, Anil Seth, Christopher Buckley

The Kalman filter is a fundamental filtering algorithm that fuses noisy sensory data, a previous state estimate, and a dynamics model to produce a principled estimate of the current state.
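A minimal one-dimensional sketch of that fusion (illustrative names and noise values, not from the paper): a noisy measurement, a previous estimate, and a dynamics model combine into a new estimate weighted by their uncertainties.

```python
import numpy as np

def kalman_step(x_prev, p_prev, z, q=0.01, r=0.1):
    """One predict/update cycle for a scalar random-walk model.

    x_prev, p_prev -- previous state estimate and its variance
    z              -- new noisy measurement
    q, r           -- process and measurement noise variances
    """
    # Predict: propagate the estimate through the (identity) dynamics,
    # inflating its uncertainty by the process noise.
    x_pred, p_pred = x_prev, p_prev + q
    # Update: the Kalman gain weights the measurement against the prior
    # according to their relative uncertainties.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Track a constant true state of 1.0 from noisy readings.
rng = np.random.default_rng(0)
x, p = 0.0, 1.0
for _ in range(50):
    x, p = kalman_step(x, p, 1.0 + rng.normal(scale=0.3))
```

Each step shrinks the posterior variance `p`, so later measurements move the estimate less -- the "principled" part of the fusion.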

1 code implementation • 13 Oct 2020 • Beren Millidge, Alexander Tschantz, Anil Seth, Christopher L Buckley

The recently proposed Activation Relaxation (AR) algorithm provides a simple and robust approach for approximating the backpropagation of error algorithm using only local learning rules.

no code implementations • 2 Oct 2020 • Beren Millidge, Alexander Tschantz, Anil Seth, Christopher L Buckley

Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs, which underlies both perception and learning, is the minimization of prediction errors.
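As a toy illustration of that core computation (a sketch under simple linear-Gaussian assumptions, not the paper's model), perception can be cast as gradient descent of a belief on squared prediction error:

```python
# A latent belief mu generates a prediction W * mu; perceiving an
# observation means descending the squared prediction error with
# respect to mu. The generative model, names, and learning rate are
# all illustrative.

def infer(obs, W, steps=200, lr=0.05):
    mu = 0.0                     # initial belief about the latent cause
    for _ in range(steps):
        err = obs - W * mu       # prediction error
        mu += lr * W * err       # move the belief to reduce the error
    return mu

mu_hat = infer(obs=4.0, W=2.0)   # converges toward obs / W = 2.0
```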

1 code implementation • 11 Sep 2020 • Beren Millidge, Alexander Tschantz, Anil K. Seth, Christopher L. Buckley

The backpropagation of error algorithm (backprop) has been instrumental in the recent success of deep learning.

no code implementations • 11 Jul 2020 • Alexander Tschantz, Beren Millidge, Anil K. Seth, Christopher L. Buckley

The field of reinforcement learning can be split into model-based and model-free methods.

no code implementations • 23 Jun 2020 • Beren Millidge, Alexander Tschantz, Anil K. Seth, Christopher L. Buckley

Active Inference (AIF) is an emerging framework in the brain sciences which suggests that biological agents act to minimise a variational bound on model evidence.

no code implementations • 13 Jun 2020 • Beren Millidge, Alexander Tschantz, Anil K. Seth, Christopher L. Buckley

There are several ways to categorise reinforcement learning (RL) algorithms, such as either model-based or model-free, policy-based or planning-based, on-policy or off-policy, and online or offline.

1 code implementation • 7 Jun 2020 • Beren Millidge, Alexander Tschantz, Christopher L. Buckley

Recently, it has been shown that backprop in multilayer perceptrons (MLPs) can be approximated using predictive coding, a biologically plausible process theory of cortical computation which relies only on local and Hebbian updates.
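The "local and Hebbian" character of such updates can be sketched for a single linear unit (an illustration of the update's form, not the paper's derivation): the weight change is simply the product of a locally available error and the local presynaptic activity.

```python
# Hebbian-style local learning rule of the kind predictive coding uses:
# no error signal is transported backwards through the network; each
# weight sees only its own input and its layer's prediction error.
# Single linear unit with squared loss, purely for illustration.

def local_update(W, x, target, lr=0.1):
    err = target - W * x     # prediction error, available at this layer
    return W + lr * err * x  # Hebbian form: local error * local activity

W = 0.0
for _ in range(100):
    W = local_update(W, x=1.0, target=3.0)
# W converges close to the target mapping (3.0 for x = 1.0)
```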

no code implementations • 17 Apr 2020 • Beren Millidge, Alexander Tschantz, Christopher L. Buckley

The Expected Free Energy (EFE) is a central quantity in the theory of active inference.

no code implementations • 28 Feb 2020 • Alexander Tschantz, Beren Millidge, Anil K. Seth, Christopher L. Buckley

The central tenet of reinforcement learning (RL) is that agents seek to maximize the sum of cumulative rewards.

2 code implementations • 8 Jul 2019 • Beren Millidge

Active Inference is a theory of action arising from neuroscience which casts action and planning as a Bayesian inference problem to be solved by minimizing a single quantity: the variational free energy.
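The variational free energy in question is the standard quantity from variational inference (written here in generic notation, not necessarily the paper's), for hidden states $s$ and observations $o$:

```latex
\mathcal{F} = \mathbb{E}_{Q(s)}\!\left[\ln Q(s) - \ln P(o, s)\right]
            = D_{\mathrm{KL}}\!\left[Q(s)\,\|\,P(s \mid o)\right] - \ln P(o)
```

Since the KL term is non-negative, $\mathcal{F} \geq -\ln P(o)$: minimizing $\mathcal{F}$ both improves the approximate posterior $Q(s)$ and tightens a bound on log model evidence, which is why a single quantity can serve perception and planning alike.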
