no code implementations • 14 Oct 2024 • Ran Wei, Joseph Lee, Shohei Wakayama, Alexander Tschantz, Conor Heins, Christopher Buckley, John Carenbauer, Hari Thiruvengada, Mahault Albarracin, Miguel de Prado, Petter Horling, Peter Winzell, Renjith Rajagopal
Predicting future trajectories of nearby objects, especially under occlusion, is a crucial task in autonomous driving and safe robot navigation.
1 code implementation • 4 Oct 2024 • Toon Van de Maele, Ozan Catal, Alexander Tschantz, Christopher L. Buckley, Tim Verbelen
Recently, 3D Gaussian Splatting has emerged as a promising approach for modeling 3D scenes using mixtures of Gaussians.
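As a minimal, hedged illustration of the underlying primitive (not the paper's method), a scene density modelled as a weighted mixture of 3D Gaussians can be evaluated as in the sketch below; the means, covariances, weights and query points are illustrative assumptions.

import numpy as np

def gaussian_mixture_density(x, means, covs, weights):
    # Density of a weighted mixture of 3D Gaussians at query points x of shape (N, 3).
    density = np.zeros(len(x))
    for mu, cov, w in zip(means, covs, weights):
        diff = x - mu                                   # offsets from the component mean
        inv = np.linalg.inv(cov)
        norm = 1.0 / np.sqrt((2 * np.pi) ** 3 * np.linalg.det(cov))
        expo = -0.5 * np.einsum('ni,ij,nj->n', diff, inv, diff)
        density += w * norm * np.exp(expo)
    return density

# Example with two made-up isotropic components
means = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
covs = np.array([np.eye(3) * 0.1, np.eye(3) * 0.2])
weights = np.array([0.6, 0.4])
print(gaussian_mixture_density(np.array([[0.1, 0.0, 0.0]]), means, covs, weights))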
1 code implementation • 29 Aug 2024 • Conor Heins, Hao Wu, Dimitrije Markovic, Alexander Tschantz, Jeff Beck, Christopher Buckley
Previous work shows that fast variational methods can reduce the compute requirements of Bayesian methods by eliminating the need for gradient computation or sampling, but are often limited to simple models.
no code implementations • 27 Jul 2024 • Karl Friston, Conor Heins, Tim Verbelen, Lancelot Da Costa, Tommaso Salvatori, Dimitrije Markovic, Alexander Tschantz, Magnus Koudahl, Christopher Buckley, Thomas Parr
This paper describes a discrete state-space model -- and accompanying methods -- for generative modelling.
no code implementations • 6 Dec 2023 • Karl J. Friston, Tommaso Salvatori, Takuya Isomura, Alexander Tschantz, Alex Kiefer, Tim Verbelen, Magnus Koudahl, Aswin Paul, Thomas Parr, Adeel Razi, Brett Kagan, Christopher L. Buckley, Maxwell J. D. Ramstead
First, we simulate the aforementioned in vitro experiments, in which neuronal cultures spontaneously learn to play Pong, by implementing nested, free energy minimising processes.
no code implementations • 17 Nov 2023 • Karl J. Friston, Lancelot Da Costa, Alexander Tschantz, Alex Kiefer, Tommaso Salvatori, Victorita Neacsu, Magnus Koudahl, Conor Heins, Noor Sajid, Dimitrije Markovic, Thomas Parr, Tim Verbelen, Christopher L Buckley
This paper concerns structure learning or discovery of discrete generative models.
no code implementations • 2 Dec 2022 • Karl J Friston, Maxwell J D Ramstead, Alex B Kiefer, Alexander Tschantz, Christopher L Buckley, Mahault Albarracin, Riddhi J Pitliya, Conor Heins, Brennan Klein, Beren Millidge, Dalton A R Sakthivadivel, Toby St Clere Smithe, Magnus Koudahl, Safae Essafi Tremblay, Capm Petersen, Kaiser Fung, Jason G Fox, Steven Swanson, Dan Mapes, Gabriel René
In this context, we understand intelligence as the capacity to accumulate evidence for a generative model of one's sensed world -- also known as self-evidencing.
1 code implementation • 6 Sep 2022 • Alex B. Kiefer, Beren Millidge, Alexander Tschantz, Christopher L. Buckley
Capsule networks are a neural network architecture specialized for visual scene recognition.
no code implementations • 5 Apr 2022 • Alexander Tschantz, Beren Millidge, Anil K Seth, Christopher L Buckley
This is at odds with evidence that several aspects of visual perception - including complex forms of object recognition - arise from an initial "feedforward sweep" that occurs on fast timescales which preclude substantial recurrent activity.
no code implementations • 18 Jan 2022 • Anil Seth, Tomasz Korbak, Alexander Tschantz
Bruineberg and colleagues helpfully distinguish between instrumental and ontological interpretations of Markov blankets, exposing the dangers of using the former to make claims about the latter.
1 code implementation • 11 Jan 2022 • Conor Heins, Beren Millidge, Daphne Demekas, Brennan Klein, Karl Friston, Iain Couzin, Alexander Tschantz
Active inference is an account of cognition and behavior in complex systems which brings together action, perception, and learning under the theoretical mantle of Bayesian inference.
no code implementations • 3 Dec 2021 • Pablo Lanillos, Cristian Meo, Corrado Pezzato, Ajith Anil Meera, Mohamed Baioumy, Wataru Ohata, Alexander Tschantz, Beren Millidge, Martijn Wisse, Christopher L. Buckley, Jun Tani
Active inference is a mathematical framework which originated in computational neuroscience as a theory of how the brain implements action, perception and learning.
no code implementations • 24 May 2021 • Miguel Aguilera, Beren Millidge, Alexander Tschantz, Christopher L. Buckley
We discover that two requirements of the FEP -- the Markov blanket condition (i.e. a statistical boundary precluding direct coupling between internal and external states) and stringent restrictions on its solenoidal flows (i.e. tendencies driving a system out of equilibrium) -- are only valid for a very narrow space of parameters.
1 code implementation • 19 Feb 2021 • Beren Millidge, Alexander Tschantz, Anil Seth, Christopher Buckley
The Kalman filter is a fundamental filtering algorithm that fuses noisy sensory data, a previous state estimate, and a dynamics model to produce a principled estimate of the current state.
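For reference, a standard textbook linear Kalman filter predict/update step is sketched below; this is the classical algorithm the paper starts from, not its neural or predictive-coding formulation, and all variable names are assumptions.

import numpy as np

def kalman_step(x, P, y, A, Q, H, R):
    # Predict: propagate the previous estimate (mean x, covariance P) through the dynamics model A with process noise Q
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: fuse the noisy observation y via the Kalman gain, given observation model H and noise R
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new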
1 code implementation • 13 Oct 2020 • Beren Millidge, Alexander Tschantz, Anil Seth, Christopher L Buckley
The recently proposed Activation Relaxation (AR) algorithm provides a simple and robust approach for approximating the backpropagation of error algorithm using only local learning rules.
no code implementations • 2 Oct 2020 • Beren Millidge, Alexander Tschantz, Anil Seth, Christopher L Buckley
Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs, which underlies both perception and learning, is the minimization of prediction errors.
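As a hedged illustration of this idea (one common formulation, not necessarily the exact model analysed in the paper), hierarchical predictive coding minimises a sum of precision-weighted prediction errors, with higher layers predicting the activity of lower ones:

\epsilon_l = x_l - f\!\left(W_l\, x_{l+1}\right), \qquad \mathcal{F} \approx \sum_l \tfrac{1}{2}\, \epsilon_l^{\top} \Pi_l\, \epsilon_l,

where perception corresponds to gradient descent on \mathcal{F} with respect to the activities x_l, and learning to gradient descent with respect to the weights W_l.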
1 code implementation • 11 Sep 2020 • Beren Millidge, Alexander Tschantz, Anil K. Seth, Christopher L. Buckley
The backpropagation of error algorithm (backprop) has been instrumental in the recent success of deep learning.
no code implementations • 11 Jul 2020 • Alexander Tschantz, Beren Millidge, Anil K. Seth, Christopher L. Buckley
The field of reinforcement learning can be split into model-based and model-free methods.
no code implementations • 23 Jun 2020 • Beren Millidge, Alexander Tschantz, Anil K. Seth, Christopher L. Buckley
Active Inference (AIF) is an emerging framework in the brain sciences which suggests that biological agents act to minimise a variational bound on model evidence.
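As a reminder of the bound in question (standard variational free energy, written here in generic notation rather than the paper's):

F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right] = D_{KL}\!\left[q(s)\,\|\,p(s \mid o)\right] - \ln p(o) \;\ge\; -\ln p(o),

so minimising F with respect to q both improves the posterior approximation and tightens an upper bound on surprisal (negative log model evidence).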
no code implementations • 13 Jun 2020 • Beren Millidge, Alexander Tschantz, Anil K. Seth, Christopher L. Buckley
There are several ways to categorise reinforcement learning (RL) algorithms: model-based or model-free, policy-based or planning-based, on-policy or off-policy, and online or offline.
1 code implementation • 7 Jun 2020 • Beren Millidge, Alexander Tschantz, Christopher L. Buckley
Recently, it has been shown that backprop in multilayer perceptrons (MLPs) can be approximated using predictive coding, a biologically plausible process theory of cortical computation which relies only on local and Hebbian updates.
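A hedged sketch of that approximation scheme, in the spirit of the result the paper builds on (Whittington and Bogacz, 2017), is given below: with the output layer clamped to a target, iterative local activity updates drive Hebbian weight updates towards those of backprop. Here predictions flow from input to output (the reverse of the generative convention above); the tanh nonlinearity, learning rates and iteration count are illustrative assumptions, not the paper's settings.

import numpy as np

f, df = np.tanh, lambda a: 1.0 - np.tanh(a) ** 2

def pc_step(x_in, target, W, n_iters=50, lr_x=0.1, lr_w=0.01):
    L = len(W)                          # number of weight matrices
    x = [x_in]                          # value nodes, initialised by a forward pass
    for l in range(L):
        x.append(W[l] @ f(x[l]))
    x[-1] = target                      # clamp the output layer to the supervised target
    for _ in range(n_iters):            # relax hidden activities towards equilibrium
        eps = [x[l + 1] - W[l] @ f(x[l]) for l in range(L)]   # local prediction errors
        for l in range(1, L):
            x[l] += lr_x * (-eps[l - 1] + df(x[l]) * (W[l].T @ eps[l]))
    eps = [x[l + 1] - W[l] @ f(x[l]) for l in range(L)]
    for l in range(L):                  # Hebbian update: local error times presynaptic activity
        W[l] += lr_w * np.outer(eps[l], f(x[l]))
    return W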
no code implementations • 17 Apr 2020 • Beren Millidge, Alexander Tschantz, Christopher L. Buckley
The Expected Free Energy (EFE) is a central quantity in the theory of active inference.
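One commonly used definition, together with its decomposition into pragmatic and epistemic terms, is written below in generic notation; the paper examines exactly how and when such decompositions can be justified.

G(\pi, \tau) = \mathbb{E}_{Q(o_\tau, s_\tau \mid \pi)}\!\left[\ln Q(s_\tau \mid \pi) - \ln \tilde{p}(o_\tau, s_\tau)\right]
\approx -\,\mathbb{E}_{Q(o_\tau \mid \pi)}\!\left[\ln \tilde{p}(o_\tau)\right] - \mathbb{E}_{Q(o_\tau \mid \pi)}\!\left[D_{KL}\!\left[Q(s_\tau \mid o_\tau, \pi)\,\|\,Q(s_\tau \mid \pi)\right]\right],

where the second line uses the usual approximation \tilde{p}(s_\tau \mid o_\tau) \approx Q(s_\tau \mid o_\tau, \pi); minimising G therefore maximises both the expected log preference over outcomes (pragmatic value) and the expected information gain (epistemic value).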
no code implementations • 28 Feb 2020 • Alexander Tschantz, Beren Millidge, Anil K. Seth, Christopher L. Buckley
The central tenet of reinforcement learning (RL) is that agents seek to maximize the expected sum of rewards.
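For concreteness, the quantity being maximised is usually the discounted return G_t = sum_k gamma^k r_{t+k}; the tiny sketch below uses made-up rewards and an assumed discount factor.

def discounted_return(rewards, gamma=0.99):
    g = 0.0
    for r in reversed(rewards):   # accumulate from the end: G_t = r_t + gamma * G_{t+1}
        g = r + gamma * g
    return g

print(discounted_return([1.0, 0.0, 0.0, 5.0]))   # 1 + 0.99**3 * 5 ≈ 5.851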
no code implementations • 24 Nov 2019 • Alexander Tschantz, Manuel Baltieri, Anil K. Seth, Christopher L. Buckley
In reinforcement learning (RL), agents often operate in partially observed and uncertain environments.