Search Results for author: Lancelot Da Costa

Found 7 papers, 0 papers with code

Geometric Methods for Sampling, Optimisation, Inference and Adaptive Agents

no code implementations • 20 Mar 2022 • Alessandro Barp, Lancelot Da Costa, Guilherme França, Karl Friston, Mark Girolami, Michael I. Jordan, Grigorios A. Pavliotis

In this chapter, we identify fundamental geometric structures that underlie the problems of sampling, optimisation, inference and adaptive decision-making.

Decision Making

Branching Time Active Inference: the theory and its generality

no code implementations • 22 Nov 2021 • Théophile Champion, Lancelot Da Costa, Howard Bowman, Marek Grześ

In this paper, we present an alternative framework that aims to unify tree search and active inference by casting planning as a structure learning problem.

Active inference, Bayesian optimal design, and expected utility

no code implementations • 21 Sep 2021 • Noor Sajid, Lancelot Da Costa, Thomas Parr, Karl Friston

Conversely, active inference reduces to Bayesian decision theory in the absence of ambiguity and relative risk, i.e., expected utility maximization.
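For orientation, the reduction described in this snippet is usually read through the standard risk and ambiguity decomposition of expected free energy; the following is a generic sketch of that decomposition (notation assumed here, not quoted from the listed paper):

\[
G(\pi) \;=\; \underbrace{D_{\mathrm{KL}}\big[\, q(o \mid \pi) \,\|\, p(o) \,\big]}_{\text{risk}}
\;+\; \underbrace{\mathbb{E}_{q(s \mid \pi)}\big[\, \mathrm{H}[\, p(o \mid s) \,] \,\big]}_{\text{ambiguity}},
\qquad
D_{\mathrm{KL}}\big[\, q(o \mid \pi) \,\|\, p(o) \,\big]
\;=\; -\,\mathbb{E}_{q(o \mid \pi)}\big[\ln p(o)\big] \;-\; \mathrm{H}\big[\, q(o \mid \pi) \,\big].
\]

When the ambiguity term vanishes and the entropy of predicted outcomes is set aside, minimising G(π) amounts to maximising the expected (log) utility E_{q(o|π)}[ln p(o)], which is the expected-utility limit the abstract refers to.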

Bayesian brains and the Rényi divergence

no code implementations • 12 Jul 2021 • Noor Sajid, Francesco Faccio, Lancelot Da Costa, Thomas Parr, Jürgen Schmidhuber, Karl Friston

Under the Bayesian brain hypothesis, behavioural variations can be attributed to different priors over generative model parameters.

Bayesian Inference, Variational Inference
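As a reminder of the divergence named in the title, the standard definition of the Rényi divergence of order α is (textbook form, not text taken from the paper):

\[
D_{\alpha}\big(p \,\|\, q\big) \;=\; \frac{1}{\alpha - 1}\,\ln \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, \mathrm{d}x,
\qquad \alpha > 0,\ \alpha \neq 1,
\]

which recovers the Kullback-Leibler divergence in the limit α → 1. In variational settings, the choice of α controls how mass-covering or mode-seeking the resulting approximate posterior is, which is one route by which a single generative model can express different behaviours.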

The relationship between dynamic programming and active inference: the discrete, finite-horizon case

no code implementations • 17 Sep 2020 • Lancelot Da Costa, Noor Sajid, Thomas Parr, Karl Friston, Ryan Smith

In this paper, we consider the relation between active inference and dynamic programming under the Bellman equation, which underlies many approaches to reinforcement learning and control.

Decision Making, Reinforcement Learning
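For reference, the Bellman backup that the comparison is made against can be written, in the discrete finite-horizon setting, as (standard textbook form, not the paper's own notation):

\[
V^{*}_{t}(s) \;=\; \max_{a}\Big[\, r(s, a) \;+\; \sum_{s'} p(s' \mid s, a)\, V^{*}_{t+1}(s') \,\Big],
\qquad t = T-1, \dots, 1,
\]

with the terminal value V*_T fixed by the final reward. Dynamic programming solves this recursion backwards in time, and it is this scheme that the paper relates to active inference.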

Sophisticated Inference

no code implementations • 7 Jun 2020 • Karl Friston, Lancelot Da Costa, Danijar Hafner, Casper Hesp, Thomas Parr

In this paper, we consider a sophisticated kind of active inference, using a recursive form of expected free energy.

Active Learning
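To give a concrete sense of what a "recursive form of expected free energy" can look like, here is a schematic recursion in the spirit of that description (illustrative only; the symbols and exact form are assumptions, not the paper's equation):

\[
G(o_t, a_t) \;\approx\; G_{\text{now}}(o_t, a_t)
\;+\; \mathbb{E}_{q(o_{t+1} \mid o_t, a_t)}\, \mathbb{E}_{q(a_{t+1} \mid o_{t+1})}\big[\, G(o_{t+1}, a_{t+1}) \,\big],
\qquad
q(a_{t+1} \mid o_{t+1}) \;\propto\; \exp\!\big(-G(o_{t+1}, a_{t+1})\big),
\]

i.e. the expected free energy of a current action folds in the expected free energy of the actions the agent anticipates choosing after its next observation, rolled out to some planning horizon.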

Neural dynamics under active inference: plausibility and efficiency of information processing

no code implementations • 22 Jan 2020 • Lancelot Da Costa, Thomas Parr, Biswa Sengupta, Karl Friston

We then show that these neuronal dynamics approximate natural gradient descent, a well-known optimisation algorithm from information geometry that follows the steepest descent of the objective in information space.
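For context, natural gradient descent preconditions the gradient with the inverse Fisher information; in its generic discrete-time form (standard definition, not taken from the paper):

\[
\theta_{k+1} \;=\; \theta_{k} \;-\; \eta\, F(\theta_{k})^{-1} \nabla_{\theta}\mathcal{L}(\theta_{k}),
\qquad
F(\theta) \;=\; \mathbb{E}_{p(x \mid \theta)}\big[\, \nabla_{\theta}\ln p(x \mid \theta)\, \nabla_{\theta}\ln p(x \mid \theta)^{\top} \,\big],
\]

so the update follows the direction of steepest descent measured with the Fisher information metric rather than the Euclidean one, which is what "steepest descent of the objective in information space" refers to.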
