Search Results for author: Beren Millidge

Found 21 papers, 8 papers with code

pymdp: A Python library for active inference in discrete state spaces

1 code implementation • 11 Jan 2022 • Conor Heins, Beren Millidge, Daphne Demekas, Brennan Klein, Karl Friston, Iain Couzin, Alexander Tschantz

Active inference is an account of cognition and behavior in complex systems which brings together action, perception, and learning under the theoretical mantle of Bayesian inference.

Bayesian Inference
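To make the discrete-state-space setting concrete, here is a minimal one-step active inference loop in plain numpy. It is an illustrative sketch of the kind of generative model (likelihood A, transitions B, preferences C, prior D) that pymdp operates on, not an example of pymdp's own API; all matrices and numbers here are invented for the example.

```python
# Minimal discrete-state active inference sketch (illustrative, not pymdp's API).
import numpy as np

n_states, n_obs, n_actions = 4, 4, 2
A = np.eye(n_obs, n_states) * 0.9 + 0.1 / n_obs           # likelihood p(o|s), columns sum to 1
A /= A.sum(axis=0, keepdims=True)
B = np.stack([np.roll(np.eye(n_states), k, axis=0)
              for k in range(n_actions)], axis=2)          # transitions p(s'|s,a)
C = np.log(np.array([0.7, 0.1, 0.1, 0.1]))                 # log-preferences over observations
D = np.ones(n_states) / n_states                           # prior over initial states

def infer_states(obs_idx, prior):
    """Exact posterior over hidden states given a single observation (Bayes rule)."""
    post = A[obs_idx, :] * prior
    return post / post.sum()

def expected_free_energy(qs, action):
    """One-step-ahead EFE: negative expected utility plus expected ambiguity
    (one common simplification of the full quantity)."""
    qs_next = B[:, :, action] @ qs          # predicted next-state distribution
    qo_next = A @ qs_next                   # predicted observation distribution
    utility = qo_next @ C                   # extrinsic value
    ambiguity = -np.sum(qs_next * np.sum(A * np.log(A + 1e-16), axis=0))
    return -utility + ambiguity

obs = 0
qs = infer_states(obs, D)
G = np.array([expected_free_energy(qs, a) for a in range(n_actions)])
action = int(np.argmin(G))                  # act to minimise expected free energy
print("posterior:", qs, "EFE per action:", G, "chosen action:", action)
```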

Active Inference in Robotics and Artificial Agents: Survey and Challenges

no code implementations • 3 Dec 2021 • Pablo Lanillos, Cristian Meo, Corrado Pezzato, Ajith Anil Meera, Mohamed Baioumy, Wataru Ohata, Alexander Tschantz, Beren Millidge, Martijn Wisse, Christopher L. Buckley, Jun Tani

Active inference is a mathematical framework which originated in computational neuroscience as a theory of how the brain implements action, perception and learning.

Bayesian Inference

A Mathematical Walkthrough and Discussion of the Free Energy Principle

no code implementations • 30 Aug 2021 • Beren Millidge, Anil Seth, Christopher L. Buckley

The Free Energy Principle (FEP) is an influential and controversial theory which postulates a deep and powerful connection between the stochastic thermodynamics of self-organization and learning through variational inference.

Bayesian Inference • Variational Inference
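For orientation, the variational free energy at the heart of the FEP can be written in its standard form (notation chosen here, not necessarily the paper's):

```latex
F[q] \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
     \;=\; D_{\mathrm{KL}}\!\left[q(s) \,\|\, p(s \mid o)\right] \;-\; \ln p(o)
```

so minimising F simultaneously improves the approximate posterior q(s) and tightens a bound on the (negative log) model evidence, which is the sense in which self-organisation and variational inference are connected.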

Predictive Coding: a Theoretical and Experimental Review

no code implementations • 27 Jul 2021 • Beren Millidge, Anil Seth, Christopher L. Buckley

Predictive coding offers a potentially unifying account of cortical function -- postulating that the core function of the brain is to minimize prediction errors with respect to a generative model of the world.
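As a concrete illustration of prediction-error minimisation, here is a toy single-latent Gaussian model of my own construction (not an example from the review), where the estimate of the latent cause is driven by precision-weighted prediction errors:

```python
# Toy predictive-coding inference for one Gaussian latent variable.
import numpy as np

def g(x):                # generative prediction of the observation from the latent
    return np.tanh(x)

def dg(x):               # derivative of the generative mapping
    return 1.0 - np.tanh(x) ** 2

mu_prior, sigma_prior = 0.0, 1.0     # prior belief about the latent cause
sigma_obs = 0.5                      # observation noise
o = 0.8                              # observed data point

mu = mu_prior                        # current estimate of the latent cause
lr = 0.05
for _ in range(500):
    eps_obs = (o - g(mu)) / sigma_obs**2          # sensory prediction error (precision-weighted)
    eps_prior = (mu - mu_prior) / sigma_prior**2  # prediction error on the prior
    mu += lr * (dg(mu) * eps_obs - eps_prior)     # gradient descent on the free energy
print("posterior mode estimate:", mu)
```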

Applications of the Free Energy Principle to Machine Learning and Neuroscience

no code implementations • 30 Jun 2021 • Beren Millidge

Firstly, we focus on predictive coding, a neurobiologically plausible process theory derived from the free energy principle which argues that the primary function of the brain is to minimize prediction errors, showing how predictive coding can be scaled up and extended to be more biologically plausible, and elucidating its close links with other methods such as Kalman Filtering.

Bayesian Inference

Online reinforcement learning with sparse rewards through an active inference capsule

1 code implementation • 4 Jun 2021 • Alejandro Daniel Noel, Charel van Hoof, Beren Millidge

Our model is capable of solving sparse-reward problems with a very high sample efficiency due to its objective function, which encourages directed exploration of uncertain states.

Offline RL

Towards a Mathematical Theory of Abstraction

no code implementations • 3 Jun 2021 • Beren Millidge

We provide a precise characterisation of what an abstraction is and, perhaps more importantly, suggest how abstractions can be learnt directly from data both for static datasets and for dynamical systems.

How particular is the physics of the free energy principle?

no code implementations • 24 May 2021 • Miguel Aguilera, Beren Millidge, Alexander Tschantz, Christopher L. Buckley

We discover that two requirements of the FEP -- the Markov blanket condition (i.e. a statistical boundary precluding direct coupling between internal and external states) and stringent restrictions on its solenoidal flows (i.e. tendencies driving a system out of equilibrium) -- are only valid for a very narrow space of parameters.

Bayesian Inference • Variational Inference
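In a common notation (not necessarily the paper's), the Markov blanket condition referred to above is the conditional-independence requirement that, given the blanket states b (sensory and active states), internal states μ and external states η do not influence one another directly:

```latex
p(\mu, \eta \mid b) \;=\; p(\mu \mid b)\, p(\eta \mid b)
```

The solenoidal flow is the circulating, non-dissipative component of the system's dynamics, which the FEP requires to be tightly constrained.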

Understanding the Origin of Information-Seeking Exploration in Probabilistic Objectives for Control

1 code implementation • 11 Mar 2021 • Beren Millidge, Anil Seth, Christopher Buckley

We propose a dichotomy in the objective functions underlying adaptive behaviour between \emph{evidence} objectives, which correspond to well-known reward- or utility-maximizing objectives in the literature, and \emph{divergence} objectives, which instead seek to minimize the divergence between the agent's expected and desired futures. We argue that this new class of divergence objectives could form the mathematical foundation for a much richer understanding of the exploratory components of adaptive and intelligent action, beyond simply greedy utility maximization.

Information Seeking
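Schematically, and in notation chosen here rather than taken from the paper, the two families of objectives can be contrasted as:

```latex
\mathcal{J}_{\text{evidence}}(\pi) \;=\; \mathbb{E}_{q(o \mid \pi)}\!\left[\ln \tilde{p}(o)\right],
\qquad
\mathcal{J}_{\text{divergence}}(\pi) \;=\; -\,D_{\mathrm{KL}}\!\left[q(o \mid \pi) \,\|\, \tilde{p}(o)\right]
```

where q(o|π) is the agent's predicted distribution over future outcomes under policy π and p̃(o) encodes its desired outcomes. The divergence objective equals the evidence objective plus the entropy of q(o|π), which is one route by which information-seeking exploration enters.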

Neural Kalman Filtering

1 code implementation • 19 Feb 2021 • Beren Millidge, Alexander Tschantz, Anil Seth, Christopher Buckley

The Kalman filter is a fundamental filtering algorithm that fuses noisy sensory data, a previous state estimate, and a dynamics model to produce a principled estimate of the current state.
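A minimal sketch of the standard linear-Gaussian predict/update cycle described above (this is the textbook filter, not the paper's proposed neural implementation; the model matrices are made up for illustration):

```python
# Textbook linear Kalman filter: predict with the dynamics model, then fuse the
# prediction with a noisy observation via the precision-weighted innovation.
import numpy as np

A_dyn = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])                   # we only observe position
Q = 0.01 * np.eye(2)                         # process noise covariance
R = np.array([[0.25]])                       # observation noise covariance

def kalman_step(x, P, z):
    # Predict: propagate the previous estimate through the dynamics model.
    x_pred = A_dyn @ x
    P_pred = A_dyn @ P @ A_dyn.T + Q
    # Update: correct the prediction with the Kalman-gain-weighted innovation.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(2), np.eye(2)
for z in [np.array([1.1]), np.array([2.0]), np.array([2.9])]:
    x, P = kalman_step(x, P, z)
print("state estimate:", x)
```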

Investigating the Scalability and Biological Plausibility of the Activation Relaxation Algorithm

1 code implementation • 13 Oct 2020 • Beren Millidge, Alexander Tschantz, Anil Seth, Christopher L. Buckley

The recently proposed Activation Relaxation (AR) algorithm provides a simple and robust approach for approximating the backpropagation of error algorithm using only local learning rules.

Relaxing the Constraints on Predictive Coding Models

no code implementations • 2 Oct 2020 • Beren Millidge, Alexander Tschantz, Anil Seth, Christopher L. Buckley

Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs, which underlies both perception and learning, is the minimization of prediction errors.

Variational Inference

Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain

1 code implementation • 11 Sep 2020 • Beren Millidge, Alexander Tschantz, Anil K. Seth, Christopher L. Buckley

The backpropagation of error algorithm (backprop) has been instrumental in the recent success of deep learning.

On the Relationship Between Active Inference and Control as Inference

no code implementations • 23 Jun 2020 • Beren Millidge, Alexander Tschantz, Anil K. Seth, Christopher L. Buckley

Active Inference (AIF) is an emerging framework in the brain sciences which suggests that biological agents act to minimise a variational bound on model evidence.

Decision Making • Variational Inference

Reinforcement Learning as Iterative and Amortised Inference

no code implementations • 13 Jun 2020 • Beren Millidge, Alexander Tschantz, Anil K. Seth, Christopher L. Buckley

There are several ways to categorise reinforcement learning (RL) algorithms, such as model-based versus model-free, policy-based versus planning-based, on-policy versus off-policy, and online versus offline.

General Classification

Predictive Coding Approximates Backprop along Arbitrary Computation Graphs

1 code implementation • 7 Jun 2020 • Beren Millidge, Alexander Tschantz, Christopher L. Buckley

Recently, it has been shown that backprop in multilayer-perceptrons (MLPs) can be approximated using predictive coding, a biologically-plausible process theory of cortical computation which relies only on local and Hebbian updates.
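The following sketch illustrates the general idea under the standard predictive-coding formulation (my own toy implementation, not the paper's code): value nodes are relaxed to minimise local prediction errors, and weights are then updated with purely local, Hebbian-style rules. At equilibrium the errors approximate the gradients that backprop would have computed.

```python
# Training one example of a small MLP with local predictive-coding updates.
import numpy as np

rng = np.random.default_rng(0)
f = np.tanh
df = lambda v: 1.0 - np.tanh(v) ** 2

d_in, d_hid, d_out = 3, 5, 2
W1 = rng.normal(0, 0.5, (d_hid, d_in))
W2 = rng.normal(0, 0.5, (d_out, d_hid))

x = rng.normal(size=d_in)
y = rng.normal(size=d_out)

# Forward pass initialises the value nodes; the output node is clamped to the target.
v0, v1, v2 = x, W1 @ f(x), y

# Inference phase: relax the hidden value node to minimise the summed squared errors.
for _ in range(100):
    e1 = v1 - W1 @ f(v0)                          # local prediction error at layer 1
    e2 = v2 - W2 @ f(v1)                          # local prediction error at layer 2
    v1 = v1 - 0.1 * (e1 - df(v1) * (W2.T @ e2))   # gradient descent on the energy

# Learning phase: purely local, Hebbian-style weight updates from equilibrium errors,
# which approximate the corresponding backprop gradients.
e1 = v1 - W1 @ f(v0)
e2 = v2 - W2 @ f(v1)
lr = 0.05
W2 = W2 + lr * np.outer(e2, f(v1))
W1 = W1 + lr * np.outer(e1, f(v0))
```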

Whence the Expected Free Energy?

no code implementations • 17 Apr 2020 • Beren Millidge, Alexander Tschantz, Christopher L. Buckley

The Expected Free Energy (EFE) is a central quantity in the theory of active inference.
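For reference, the EFE of a policy π is usually written (up to notational variation) as:

```latex
G(\pi) \;=\; \mathbb{E}_{q(o, s \mid \pi)}\!\left[\ln q(s \mid \pi) - \ln \tilde{p}(o, s)\right]
\;\approx\; -\underbrace{\mathbb{E}_{q(o \mid \pi)}\!\left[\ln \tilde{p}(o)\right]}_{\text{extrinsic value}}
\;-\; \underbrace{\mathbb{E}_{q(o \mid \pi)}\, D_{\mathrm{KL}}\!\left[q(s \mid o, \pi) \,\|\, q(s \mid \pi)\right]}_{\text{epistemic value (information gain)}}
```

Minimising G therefore trades off realising preferred outcomes against gaining information about hidden states; the question the paper examines is where this particular quantity comes from and how it relates to the variational free energy.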

Reinforcement Learning through Active Inference

no code implementations • 28 Feb 2020 • Alexander Tschantz, Beren Millidge, Anil K. Seth, Christopher L. Buckley

The central tenet of reinforcement learning (RL) is that agents seek to maximize the sum of cumulative rewards.

Decision Making

Deep Active Inference as Variational Policy Gradients

2 code implementations • 8 Jul 2019 • Beren Millidge

Active Inference is a theory of action arising from neuroscience which casts action and planning as a Bayesian inference problem, solved by minimizing a single quantity: the variational free energy.

Bayesian Inference
