Search Results for author: Christopher L. Buckley

Found 22 papers, 5 papers with code

Active Inference and Intentional Behaviour

no code implementations • 6 Dec 2023 • Karl J. Friston, Tommaso Salvatori, Takuya Isomura, Alexander Tschantz, Alex Kiefer, Tim Verbelen, Magnus Koudahl, Aswin Paul, Thomas Parr, Adeel Razi, Brett Kagan, Christopher L. Buckley, Maxwell J. D. Ramstead

First, we simulate the aforementioned in vitro experiments, in which neuronal cultures spontaneously learn to play Pong, by implementing nested, free energy minimising processes.

Relative representations for cognitive graphs

1 code implementation • 9 Sep 2023 • Alex B. Kiefer, Christopher L. Buckley

Although the latent spaces learned by distinct neural networks are not generally directly comparable, recent work in machine learning has shown that it is possible to use the similarities and differences among latent space vectors to derive "relative representations" with comparable representational power to their "absolute" counterparts, and which are nearly identical across models trained on similar data distributions.
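
A minimal sketch of that idea, assuming NumPy and synthetic embeddings (an illustration of the general technique, not the authors' implementation): each latent vector is re-expressed by its cosine similarities to a shared set of anchor points, and because cosine similarity is invariant to rotations of the latent space, two models that differ only by such a transformation yield (nearly) identical relative representations.

    import numpy as np

    def relative_representation(latents, anchors):
        # Each row becomes the vector of cosine similarities to the anchor vectors.
        latents = latents / np.linalg.norm(latents, axis=1, keepdims=True)
        anchors = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
        return latents @ anchors.T                # shape: (n_points, n_anchors)

    rng = np.random.default_rng(0)
    z_a = rng.normal(size=(100, 64))              # latents from a hypothetical model A
    Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))
    z_b = z_a @ Q                                 # model B: same geometry, rotated basis
    idx = rng.choice(100, size=10, replace=False) # shared anchor points
    rel_a = relative_representation(z_a, z_a[idx])
    rel_b = relative_representation(z_b, z_b[idx])
    assert np.allclose(rel_a, rel_b)              # relative representations coincide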

Understanding Predictive Coding as an Adaptive Trust-Region Method

no code implementations • 29 May 2023 • Francesco Innocenti, Ryan Singh, Christopher L. Buckley

Predictive coding (PC) is a brain-inspired local learning algorithm that has recently been suggested to provide advantages over backpropagation (BP) in biologically relevant scenarios.
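
For orientation, PC in its standard form (a generic sketch in common notation, not necessarily this paper's) minimises a sum of squared, precision-weighted prediction errors, first relaxing the neural activities and then updating the weights, both by local gradient descent:

    \mathcal{F} \;=\; \sum_{l} \frac{1}{2\sigma_l^{2}} \bigl\lVert x_l - f(W_l x_{l-1}) \bigr\rVert^{2},
    \qquad \dot{x}_l \propto -\frac{\partial \mathcal{F}}{\partial x_l},
    \qquad \Delta W_l \propto -\frac{\partial \mathcal{F}}{\partial W_l}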

Attention: Marginal Probability is All You Need?

no code implementations • 7 Apr 2023 • Ryan Singh, Christopher L. Buckley

Recently attentional mechanisms have become a dominating architectural choice of machine learning and are the central innovation of Transformers.

Management
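
For reference, the scaled dot-product attention at the core of Transformers, in plain NumPy (a generic sketch, not the paper's graphical-model formulation): the softmax over keys defines a probability distribution for each query, and the output is the corresponding expectation over the values.

    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def attention(Q, K, V):
        # p(key | query) via a softmax over scaled dot products ...
        p = softmax(Q @ K.T / np.sqrt(K.shape[-1]), axis=-1)
        return p @ V   # ... and an expectation of the values under it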

Pretraining Language Models with Human Preferences

1 code implementation • 16 Feb 2023 • Tomasz Korbak, Kejian Shi, Angelica Chen, Rasika Bhalerao, Christopher L. Buckley, Jason Phang, Samuel R. Bowman, Ethan Perez

Language models (LMs) are pretrained to imitate internet text, including content that would violate human preferences if generated by an LM: falsehoods, offensive comments, personally identifiable information, low-quality or buggy code, and more.

Imitation Learning • Language Modelling
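
One family of objectives explored in this line of work is conditional training: score each pretraining document with a preference model and prepend a control token, so that at inference time the LM can be conditioned on the "preferred" token. A minimal sketch, in which reward_fn, the threshold, and the token strings are hypothetical placeholders:

    def tag_documents(docs, reward_fn, threshold=0.0,
                      good="<|good|>", bad="<|bad|>"):
        # Prepend a control token according to the preference score, then
        # pretrain with ordinary next-token prediction on the tagged corpus.
        return [(good if reward_fn(doc) >= threshold else bad) + doc for doc in docs]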

Capsule Networks as Generative Models

1 code implementation • 6 Sep 2022 • Alex B. Kiefer, Beren Millidge, Alexander Tschantz, Christopher L. Buckley

Capsule networks are a neural network architecture specialized for visual scene recognition.

Scene Recognition
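
As background from the original capsule networks literature (not this paper's generative formulation): each capsule outputs a vector whose direction encodes an entity's pose and whose length encodes the probability that the entity is present, enforced by the "squash" nonlinearity.

    import numpy as np

    def squash(s, eps=1e-8):
        # Rescale the capsule vector so its norm lies in [0, 1) while keeping its direction.
        norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
        return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)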

Knitting a Markov blanket is hard when you are out-of-equilibrium: two examples in canonical nonequilibrium models

no code implementations • 26 Jul 2022 • Miguel Aguilera, Ángel Poc-López, Conor Heins, Christopher L. Buckley

Bayesian theories of biological and brain function speculate that Markov blankets (a conditional independence separating a system from external states) play a key role in facilitating inference-like behaviour in living systems.
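
The Markov blanket condition at issue is the conditional-independence statement that, given blanket states b (e.g. sensory and active states), internal states μ and external states η carry no further information about one another:

    p(\mu, \eta \mid b) \;=\; p(\mu \mid b)\, p(\eta \mid b)
    \quad\Longleftrightarrow\quad \mu \;\perp\!\!\!\perp\; \eta \mid b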

Active Inference in Robotics and Artificial Agents: Survey and Challenges

no code implementations • 3 Dec 2021 • Pablo Lanillos, Cristian Meo, Corrado Pezzato, Ajith Anil Meera, Mohamed Baioumy, Wataru Ohata, Alexander Tschantz, Beren Millidge, Martijn Wisse, Christopher L. Buckley, Jun Tani

Active inference is a mathematical framework which originated in computational neuroscience as a theory of how the brain implements action, perception and learning.

Bayesian Inference

How particular is the physics of the free energy principle?

no code implementations • 24 May 2021 • Miguel Aguilera, Beren Millidge, Alexander Tschantz, Christopher L. Buckley

We discover that two requirements of the FEP -- the Markov blanket condition (i.e. a statistical boundary precluding direct coupling between internal and external states) and stringent restrictions on its solenoidal flows (i.e. tendencies driving a system out of equilibrium) -- are only valid for a very narrow space of parameters.

Bayesian Inference • Variational Inference
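
For context (a sketch in the standard FEP notation; conventions vary across papers): for a stationary diffusion dx = f(x) dt + ω with fluctuation covariance 2Γ, the flow can be written in Helmholtz-decomposed form, a gradient ascent on the log steady-state density plus a solenoidal circulation, where Q is an antisymmetric matrix. The paper examines how narrow the conditions are under which this form, together with the blanket condition, actually holds.

    f(x) \;=\; (\Gamma - Q)\,\nabla \ln p^{*}(x)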

Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain

1 code implementation • 11 Sep 2020 • Beren Millidge, Alexander Tschantz, Anil K. Seth, Christopher L. Buckley

The backpropagation of error algorithm (backprop) has been instrumental in the recent success of deep learning.

On the Relationship Between Active Inference and Control as Inference

no code implementations • 23 Jun 2020 • Beren Millidge, Alexander Tschantz, Anil K. Seth, Christopher L. Buckley

Active Inference (AIF) is an emerging framework in the brain sciences which suggests that biological agents act to minimise a variational bound on model evidence.

Decision Making • reinforcement-learning • +2
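
The variational bound referred to here is the variational free energy, an upper bound on surprisal (negative log model evidence):

    F[q] \;=\; \mathbb{E}_{q(s)}\bigl[\ln q(s) - \ln p(o, s)\bigr]
    \;=\; D_{\mathrm{KL}}\bigl[q(s) \,\Vert\, p(s \mid o)\bigr] \;-\; \ln p(o)
    \;\ge\; -\ln p(o)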

Reinforcement Learning as Iterative and Amortised Inference

no code implementations • 13 Jun 2020 • Beren Millidge, Alexander Tschantz, Anil K. Seth, Christopher L. Buckley

There are several ways to categorise reinforcement learning (RL) algorithms, such as either model-based or model-free, policy-based or planning-based, on-policy or off-policy, and online or offline.

General Classification • reinforcement-learning • +1

Predictive Coding Approximates Backprop along Arbitrary Computation Graphs

1 code implementation • 7 Jun 2020 • Beren Millidge, Alexander Tschantz, Christopher L. Buckley

Recently, it has been shown that backprop in multilayer perceptrons (MLPs) can be approximated using predictive coding, a biologically-plausible process theory of cortical computation which relies only on local and Hebbian updates.

BIG-bench Machine Learning
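
A minimal sketch of such a predictive coding step on a two-layer network (a generic NumPy illustration under standard PC assumptions, not the released code): the activities first relax to minimise the sum of local squared prediction errors, after which each weight update is local and Hebbian-like (postsynaptic error times presynaptic activity) and approximates the corresponding backprop gradient.

    import numpy as np

    def pc_training_step(x0, target, W1, W2, n_iters=50, dt=0.1, lr=1e-3):
        f, df = np.tanh, lambda x: 1.0 - np.tanh(x) ** 2
        x1 = W1 @ f(x0)                                # start the hidden layer at its prediction
        x2 = target                                    # clamp the output layer to the target
        for _ in range(n_iters):                       # inference: relax the hidden activities
            e1 = x1 - W1 @ f(x0)                       # local prediction errors
            e2 = x2 - W2 @ f(x1)
            x1 += dt * (-e1 + df(x1) * (W2.T @ e2))    # gradient descent on the PC energy
        e1, e2 = x1 - W1 @ f(x0), x2 - W2 @ f(x1)
        W1 += lr * np.outer(e1, f(x0))                 # Hebbian-like: error times presynaptic activity
        W2 += lr * np.outer(e2, f(x1))
        return W1, W2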

Whence the Expected Free Energy?

no code implementations • 17 Apr 2020 • Beren Millidge, Alexander Tschantz, Christopher L. Buckley

The Expected Free Energy (EFE) is a central quantity in the theory of active inference.
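
In its most common form (conventions differ slightly across the literature), the EFE of a policy π at a future time τ is an expectation under the predicted joint density over hidden states and outcomes, and decomposes, up to the standard approximations, into risk plus ambiguity, where p(o_τ) encodes prior preferences:

    G(\pi, \tau) \;=\; \mathbb{E}_{q(o_\tau, s_\tau \mid \pi)}\bigl[\ln q(s_\tau \mid \pi) - \ln p(o_\tau, s_\tau)\bigr]
    \;\approx\; \underbrace{D_{\mathrm{KL}}\bigl[q(o_\tau \mid \pi)\,\Vert\, p(o_\tau)\bigr]}_{\text{risk}}
    \;+\; \underbrace{\mathbb{E}_{q(s_\tau \mid \pi)}\bigl[\mathcal{H}[p(o_\tau \mid s_\tau)]\bigr]}_{\text{ambiguity}}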

Reinforcement Learning through Active Inference

no code implementations • 28 Feb 2020 • Alexander Tschantz, Beren Millidge, Anil K. Seth, Christopher L. Buckley

The central tenet of reinforcement learning (RL) is that agents seek to maximize the sum of cumulative rewards.

Decision Making • reinforcement-learning • +1
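
Concretely, the quantity maximised in standard RL is the expected (discounted) return, whereas on the active inference view agents instead minimise expected free energy:

    G_t \;=\; \sum_{k=0}^{\infty} \gamma^{k}\, r_{t+k+1}, \qquad \gamma \in [0, 1)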

Scaling active inference

no code implementations • 24 Nov 2019 • Alexander Tschantz, Manuel Baltieri, Anil K. Seth, Christopher L. Buckley

In reinforcement learning (RL), agents often operate in partially observed and uncertain environments.

Efficient Exploration • Reinforcement Learning (RL)
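
In such partially observed settings the agent cannot condition on the true state and instead maintains a belief b(s), updated after taking action a and observing o in the usual Bayesian way:

    b'(s') \;\propto\; p(o \mid s') \sum_{s} p(s' \mid s, a)\, b(s)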

Generative models as parsimonious descriptions of sensorimotor loops

no code implementations • 29 Apr 2019 • Manuel Baltieri, Christopher L. Buckley

The Bayesian brain hypothesis, predictive processing and variational free energy minimisation are typically used to describe perceptual processes based on accurate generative models of the world.

Nonmodular architectures of cognitive systems based on active inference

no code implementations • 22 Mar 2019 • Manuel Baltieri, Christopher L. Buckley

We link this to popular formulations of perception and action in the cognitive sciences, and show its limitations when, for instance, external forces are not modelled by an agent.

A Minimal Active Inference Agent

no code implementations • 13 Mar 2015 • Simon McGregor, Manuel Baltieri, Christopher L. Buckley

Research on the so-called "free-energy principle" (FEP) in cognitive neuroscience is becoming increasingly high-profile.
