1 code implementation • 4 Dec 2024 • Francesco Innocenti, Paul Kinghorn, Will Yun-Farmbrough, Miguel de Llanza Varona, Ryan Singh, Christopher L. Buckley
We introduce JPC, a JAX library for training neural networks with Predictive Coding.
1 code implementation • 4 Oct 2024 • Toon Van de Maele, Ozan Catal, Alexander Tschantz, Christopher L. Buckley, Tim Verbelen
Recently, 3D Gaussian Splatting has emerged as a promising approach for modeling 3D scenes using mixtures of Gaussians.
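At its core, this represents a scene as a 3D Gaussian mixture. A minimal sketch of evaluating such a mixture density at query points (a generic GMM evaluation in NumPy, not the splatting renderer itself; all names are illustrative):

```python
import numpy as np

def gmm_density(points, means, covs, weights):
    """Evaluate a 3D Gaussian mixture density at query points.

    points:  (n, 3) query locations
    means:   (k, 3) Gaussian centres
    covs:    (k, 3, 3) covariance matrices
    weights: (k,) mixture weights summing to one
    """
    out = np.zeros(len(points))
    for j in range(len(means)):
        diff = points - means[j]
        inv_cov = np.linalg.inv(covs[j])
        norm = 1.0 / np.sqrt((2.0 * np.pi) ** 3 * np.linalg.det(covs[j]))
        # Mahalanobis distance of every query point to component j
        mahal = np.einsum("ni,ij,nj->n", diff, inv_cov, diff)
        out += weights[j] * norm * np.exp(-0.5 * mahal)
    return out
```

Gaussian Splatting additionally projects and alpha-blends these Gaussians for rendering; the snippet only shows the underlying mixture-of-Gaussians representation.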
1 code implementation • 21 Sep 2024 • Viet Dung Nguyen, Zhizhuo Yang, Christopher L. Buckley, Alexander Ororbia
Although research has produced promising results demonstrating the utility of active inference (AIF) in Markov decision processes (MDPs), relatively little work has built AIF models for environments and problems that take the form of partially observable Markov decision processes (POMDPs).
no code implementations • 13 Sep 2024 • Miguel de Llanza Varona, Christopher L. Buckley, Beren Millidge
The efficient coding hypothesis claims that organisms seek to maximize information about their sensory input in an efficient manner.
1 code implementation • 21 Aug 2024 • Francesco Innocenti, El Mehdi Achour, Ryan Singh, Christopher L. Buckley
Based on these and other results, we conjecture that all the saddles of the equilibrated energy are strict.
no code implementations • 6 Dec 2023 • Karl J. Friston, Tommaso Salvatori, Takuya Isomura, Alexander Tschantz, Alex Kiefer, Tim Verbelen, Magnus Koudahl, Aswin Paul, Thomas Parr, Adeel Razi, Brett Kagan, Christopher L. Buckley, Maxwell J. D. Ramstead
First, we simulate the aforementioned in vitro experiments, in which neuronal cultures spontaneously learn to play Pong, by implementing nested, free energy minimising processes.
1 code implementation • 9 Sep 2023 • Alex B. Kiefer, Christopher L. Buckley
Although the latent spaces learned by distinct neural networks are not generally directly comparable, recent work in machine learning has shown that the similarities and differences among latent space vectors can be used to derive "relative representations". These have representational power comparable to their "absolute" counterparts and are nearly identical across models trained on similar data distributions.
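The relative-representation construction can be sketched as projecting each latent vector onto its cosine similarities with a fixed set of shared anchor samples (a minimal NumPy sketch of the general idea; function and variable names are illustrative, not from any released code):

```python
import numpy as np

def relative_representation(z, anchors):
    """Map absolute latents to cosine similarities with shared anchors.

    z:       (n, d) latent vectors produced by one model
    anchors: (k, d) latents of k anchor samples from the same model
    Returns an (n, k) relative representation, comparable across models
    trained on similar data (up to the choice of anchors).
    """
    z_n = z / np.linalg.norm(z, axis=1, keepdims=True)
    a_n = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return z_n @ a_n.T
```

Because cosine similarity is invariant to orthogonal transformations of the latent space, two models whose latent spaces differ by a rotation produce identical relative representations, which is what makes them comparable.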
no code implementations • 15 Aug 2023 • Tommaso Salvatori, Ankur Mali, Christopher L. Buckley, Thomas Lukasiewicz, Rajesh P. N. Rao, Karl Friston, Alexander Ororbia
Artificial intelligence (AI) is rapidly becoming one of the key technologies of this century.
no code implementations • 29 May 2023 • Francesco Innocenti, Ryan Singh, Christopher L. Buckley
Predictive coding (PC) is a brain-inspired local learning algorithm that has recently been suggested to provide advantages over backpropagation (BP) in biologically relevant scenarios.
no code implementations • 7 Apr 2023 • Ryan Singh, Christopher L. Buckley
Attentional mechanisms have recently become a dominant architectural choice in machine learning and are the central innovation of Transformers.
1 code implementation • 16 Feb 2023 • Tomasz Korbak, Kejian Shi, Angelica Chen, Rasika Bhalerao, Christopher L. Buckley, Jason Phang, Samuel R. Bowman, Ethan Perez
Language models (LMs) are pretrained to imitate internet text, including content that would violate human preferences if generated by an LM: falsehoods, offensive comments, personally identifiable information, low-quality or buggy code, and more.
1 code implementation • 6 Sep 2022 • Alex B. Kiefer, Beren Millidge, Alexander Tschantz, Christopher L. Buckley
Capsule networks are a neural network architecture specialized for visual scene recognition.
no code implementations • 26 Jul 2022 • Miguel Aguilera, Ángel Poc-López, Conor Heins, Christopher L. Buckley
Bayesian theories of biological and brain function speculate that Markov blankets (a conditional independence separating a system from external states) play a key role in facilitating inference-like behaviour in living systems.
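For the Gaussian case, this conditional independence can be checked directly on the precision (inverse covariance) matrix: internal and external variables are independent given the blanket exactly when the precision block coupling them vanishes. A minimal sketch of that check (illustrative, not the paper's analysis):

```python
import numpy as np

def blanket_condition(cov, internal, external):
    """Markov blanket check for a zero-mean Gaussian density.

    For Gaussians, two sets of variables are conditionally independent
    given all remaining (blanket) variables iff the block of the
    precision matrix coupling them is zero.
    """
    prec = np.linalg.inv(cov)
    return bool(np.allclose(prec[np.ix_(internal, external)], 0.0))

# Chain internal -- blanket -- external: the blanket condition holds,
# since x0 and x2 are coupled only through the blanket variable x1.
cov_chain = np.array([[2.0, 1.0, 1.0],
                      [1.0, 1.0, 1.0],
                      [1.0, 1.0, 2.0]])
```

Adding a direct internal-external coupling (a nonzero covariance not mediated by the blanket variable) makes the corresponding precision entry nonzero and breaks the condition.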
no code implementations • 3 Dec 2021 • Pablo Lanillos, Cristian Meo, Corrado Pezzato, Ajith Anil Meera, Mohamed Baioumy, Wataru Ohata, Alexander Tschantz, Beren Millidge, Martijn Wisse, Christopher L. Buckley, Jun Tani
Active inference is a mathematical framework which originated in computational neuroscience as a theory of how the brain implements action, perception and learning.
no code implementations • 2 Sep 2021 • Paul F. Kinghorn, Beren Millidge, Christopher L. Buckley
In cognitive science, behaviour is often separated into two types.
no code implementations • 24 May 2021 • Miguel Aguilera, Beren Millidge, Alexander Tschantz, Christopher L. Buckley
We discover that two requirements of the FEP -- the Markov blanket condition (i.e., a statistical boundary precluding direct coupling between internal and external states) and stringent restrictions on its solenoidal flows (i.e., tendencies driving a system out of equilibrium) -- are only valid for a very narrow space of parameters.
1 code implementation • 11 Sep 2020 • Beren Millidge, Alexander Tschantz, Anil K. Seth, Christopher L. Buckley
The backpropagation of error algorithm (backprop) has been instrumental in the recent success of deep learning.
no code implementations • 11 Jul 2020 • Alexander Tschantz, Beren Millidge, Anil K. Seth, Christopher L. Buckley
The field of reinforcement learning can be split into model-based and model-free methods.
no code implementations • 23 Jun 2020 • Beren Millidge, Alexander Tschantz, Anil K. Seth, Christopher L. Buckley
Active Inference (AIF) is an emerging framework in the brain sciences which suggests that biological agents act to minimise a variational bound on model evidence.
no code implementations • 13 Jun 2020 • Beren Millidge, Alexander Tschantz, Anil K. Seth, Christopher L. Buckley
There are several ways to categorise reinforcement learning (RL) algorithms, such as either model-based or model-free, policy-based or planning-based, on-policy or off-policy, and online or offline.
1 code implementation • 7 Jun 2020 • Beren Millidge, Alexander Tschantz, Christopher L. Buckley
Recently, it has been shown that backprop in multilayer perceptrons (MLPs) can be approximated using predictive coding, a biologically plausible process theory of cortical computation that relies only on local and Hebbian updates.
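The approximation can be illustrated numerically. The sketch below (illustrative; a linear two-layer case with small weights, not the paper's code) relaxes a hidden activity by gradient descent on a local energy, then compares the resulting Hebbian weight updates with the backprop gradients of the same loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer linear network y = W2 @ W1 @ x, with small weights
d_in, d_hid, d_out = 4, 6, 3
W1 = 0.1 * rng.normal(size=(d_hid, d_in))
W2 = 0.1 * rng.normal(size=(d_out, d_hid))

x0 = rng.normal(size=d_in)                   # input layer (clamped to data)
h = W1 @ x0                                  # feedforward hidden activity
t = W2 @ h + 1e-3 * rng.normal(size=d_out)   # target: output plus a small error

# Inference: relax the hidden activity x1 by gradient descent on the energy
#   F = 0.5*||x1 - W1 x0||^2 + 0.5*||t - W2 x1||^2
x1 = h.copy()
for _ in range(300):
    e1 = x1 - W1 @ x0                        # local prediction error, layer 1
    e2 = t - W2 @ x1                         # local prediction error, layer 2
    x1 = x1 - 0.2 * (e1 - W2.T @ e2)

# Local, Hebbian weight updates: (postsynaptic error) x (presynaptic activity)
e1 = x1 - W1 @ x0
e2 = t - W2 @ x1
dW1_pc = np.outer(e1, x0)
dW2_pc = np.outer(e2, x1)

# Backprop gradients of the same loss 0.5*||t - W2 W1 x0||^2, for comparison
e_out = t - W2 @ W1 @ x0
dW2_bp = np.outer(e_out, h)
dW1_bp = np.outer(W2.T @ e_out, x0)
```

When the output error is small, the local updates align closely with the backprop gradients, which is the regime in which the approximation holds.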
no code implementations • 17 Apr 2020 • Beren Millidge, Alexander Tschantz, Christopher L. Buckley
The Expected Free Energy (EFE) is a central quantity in the theory of active inference.
no code implementations • 28 Feb 2020 • Alexander Tschantz, Beren Millidge, Anil K. Seth, Christopher L. Buckley
The central tenet of reinforcement learning (RL) is that agents seek to maximize the sum of cumulative rewards.
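The sum of cumulative rewards referred to here is the (discounted) return, computed with the standard backward recursion G_t = r_t + γ·G_{t+1} (a generic RL sketch, not tied to the paper):

```python
def discounted_return(rewards, gamma=0.99):
    """Discounted return G = sum_t gamma**t * r_t over a reward sequence.

    Computed backwards via the recursion G_t = r_t + gamma * G_{t+1}.
    """
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```

With gamma = 1 this reduces to the plain sum of rewards; gamma < 1 trades off immediate against future reward.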
no code implementations • 24 Nov 2019 • Alexander Tschantz, Manuel Baltieri, Anil K. Seth, Christopher L. Buckley
In reinforcement learning (RL), agents often operate in partially observed and uncertain environments.
no code implementations • 29 Apr 2019 • Manuel Baltieri, Christopher L. Buckley
The Bayesian brain hypothesis, predictive processing and variational free energy minimisation are typically used to describe perceptual processes based on accurate generative models of the world.
no code implementations • 22 Mar 2019 • Manuel Baltieri, Christopher L. Buckley
We link this to popular formulations of perception and action in the cognitive sciences, and show its limitations when, for instance, external forces are not modelled by an agent.
no code implementations • 13 Mar 2015 • Simon McGregor, Manuel Baltieri, Christopher L. Buckley
Research on the so-called "free-energy principle" (FEP) in cognitive neuroscience is becoming increasingly high-profile.