Search Results for author: Noor Sajid

Found 8 papers, 3 papers with code

Active inference, Bayesian optimal design, and expected utility

no code implementations21 Sep 2021 Noor Sajid, Lancelot Da Costa, Thomas Parr, Karl Friston

Conversely, active inference reduces to Bayesian decision theory in the absence of ambiguity and relative risk, i.e., expected utility maximization.
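For context, the reduction this abstract alludes to can be seen in the standard risk-ambiguity decomposition of expected free energy; the notation below follows the usual discrete-state convention and is included as an assumption for illustration, not quoted from the paper.

```latex
% Standard risk-ambiguity decomposition of the expected free energy of a
% policy \pi at time \tau (conventional notation, assumed here for context):
\begin{align}
G(\pi,\tau)
  &= \underbrace{D_{\mathrm{KL}}\!\left[Q(o_\tau \mid \pi)\,\|\,P(o_\tau)\right]}_{\text{risk}}
   + \underbrace{\mathbb{E}_{Q(s_\tau \mid \pi)}\!\left[\mathrm{H}\!\left[P(o_\tau \mid s_\tau)\right]\right]}_{\text{ambiguity}} \\
  &= -\,\mathbb{E}_{Q(o_\tau \mid \pi)}\!\left[\ln P(o_\tau)\right]
     - \mathrm{H}\!\left[Q(o_\tau \mid \pi)\right]
     + \mathbb{E}_{Q(s_\tau \mid \pi)}\!\left[\mathrm{H}\!\left[P(o_\tau \mid s_\tau)\right]\right],
\end{align}
% so when the ambiguity and relative (entropy) terms vanish, minimising G
% reduces to maximising the expected utility E_Q[ln P(o_\tau)].
```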

Active Inference for Stochastic Control

1 code implementation27 Aug 2021 Aswin Paul, Noor Sajid, Manoj Gopalkrishnan, Adeel Razi

Active inference has emerged as an alternative approach to control problems given its intuitive (probabilistic) formalism.

reinforcement-learning

Bayesian brains and the Rényi divergence

no code implementations12 Jul 2021 Noor Sajid, Francesco Faccio, Lancelot Da Costa, Thomas Parr, Jürgen Schmidhuber, Karl Friston

Under the Bayesian brain hypothesis, behavioural variations can be attributed to different priors over generative model parameters.

Bayesian Inference, Variational Inference
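As a reminder of the divergence the title refers to, the Rényi α-divergence is given below in its standard form; this is textbook material included for context, and the paper's specific variational bound is not reproduced here.

```latex
% R\'enyi's \alpha-divergence between distributions Q and P
% (standard definition; \alpha > 0, \alpha \neq 1):
\begin{equation}
D_\alpha\!\left[Q \,\|\, P\right]
  = \frac{1}{\alpha - 1}
    \ln \mathbb{E}_{P}\!\left[\left(\frac{Q(x)}{P(x)}\right)^{\alpha}\right]
  = \frac{1}{\alpha - 1}
    \ln \int Q(x)^{\alpha}\, P(x)^{1-\alpha}\, dx ,
\end{equation}
% which recovers the KL divergence D_{KL}[Q \| P] in the limit \alpha \to 1,
% i.e. standard variational inference corresponds to the special case \alpha = 1.
```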

Exploration and preference satisfaction trade-off in reward-free learning

no code implementations ICML Workshop URL 2021 Noor Sajid, Panagiotis Tigas, Alexey Zakharov, Zafeirios Fountas, Karl Friston

In this paper, we pursue the notion that this learnt behaviour can be a consequence of reward-free preference learning that ensures an appropriate trade-off between exploration and preference satisfaction.

OpenAI Gym
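A minimal sketch of the kind of objective such a trade-off suggests is shown below: an expected-free-energy-style score with an epistemic (exploration) term and a pragmatic (preference-satisfaction) term. All function names, shapes and the toy model are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def entropy(p, axis=-1):
    """Shannon entropy along the given axis (small constant avoids log(0))."""
    return -np.sum(p * np.log(p + 1e-12), axis=axis)

def action_score(qs, A, B_a, log_C):
    """Score a single action by pragmatic plus epistemic value.

    qs    : current posterior over hidden states, shape (S,)
    A     : likelihood P(o|s), shape (O, S), columns sum to 1
    B_a   : transition P(s'|s, a) for this action, shape (S, S)
    log_C : log prior preferences over outcomes, shape (O,)
    """
    qs_next = B_a @ qs              # predicted state distribution
    qo_next = A @ qs_next           # predicted outcome distribution

    # Pragmatic value: expected log preference (preference satisfaction).
    pragmatic = qo_next @ log_C

    # Epistemic value: expected information gain about hidden states,
    # H[Q(o)] - E_{Q(s)}[H[P(o|s)]]  (mutual information between o and s).
    epistemic = entropy(qo_next) - qs_next @ entropy(A, axis=0)

    return pragmatic + epistemic    # higher is better

# Toy usage: 3 hidden states, 3 outcomes, 2 actions.
rng = np.random.default_rng(0)
A = rng.dirichlet(np.ones(3), size=3).T                 # columns are P(o|s)
B = [rng.dirichlet(np.ones(3), size=3).T for _ in range(2)]
log_C = np.log(np.array([0.1, 0.1, 0.8]))               # prefer outcome 2
qs = np.ones(3) / 3
best_action = max(range(2), key=lambda a: action_score(qs, A, B[a], log_C))
```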

The relationship between dynamic programming and active inference: the discrete, finite-horizon case

no code implementations17 Sep 2020 Lancelot Da Costa, Noor Sajid, Thomas Parr, Karl Friston, Ryan Smith

In this paper, we consider the relation between active inference and dynamic programming under the Bellman equation, which underlies many approaches to reinforcement learning and control.

Decision Making, reinforcement-learning
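For reference, the discrete, finite-horizon Bellman backup the abstract refers to is the standard backward-induction recursion (textbook form, not the paper's notation):

```latex
% Finite-horizon Bellman backup for an MDP with reward r, transitions
% P(s'|s,a) and horizon T, computed by backward induction:
\begin{align}
V_T(s) &= \max_{a}\; r(s,a), \\
V_t(s) &= \max_{a}\; \Big[\, r(s,a) + \sum_{s'} P(s' \mid s, a)\, V_{t+1}(s') \Big],
          \qquad t = T-1, \dots, 1 .
\end{align}
% The paper considers how policy selection under active inference relates
% to this recursion in the discrete, finite-horizon case.
```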

Deep active inference agents using Monte-Carlo methods

1 code implementation NeurIPS 2020 Zafeirios Fountas, Noor Sajid, Pedro A. M. Mediano, Karl Friston

In a more complex Animal-AI environment, our agents (using the same neural architecture) are able to simulate future state transitions and actions (i.e., plan) to evince reward-directed navigation, despite temporary suspension of visual input.
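A minimal sketch of Monte-Carlo planning in the spirit described here is given below: candidate actions are scored by the average value of futures simulated through a (learned) transition model. The callables and hyper-parameters are placeholders, not the paper's deep active inference architecture.

```python
import numpy as np

def mc_plan(state, actions, transition_model, value_fn,
            n_rollouts=32, horizon=5, rng=None):
    """Return the action whose simulated futures score highest.

    transition_model(state, action, rng) -> next_state   (stochastic)
    value_fn(state) -> float                              (e.g. preference/utility)
    """
    rng = rng or np.random.default_rng()
    scores = {}
    for a0 in actions:
        total = 0.0
        for _ in range(n_rollouts):
            s = transition_model(state, a0, rng)   # take the candidate action
            ret = value_fn(s)
            for _ in range(horizon - 1):           # continue with a random rollout policy
                a = rng.choice(actions)
                s = transition_model(s, a, rng)
                ret += value_fn(s)
            total += ret
        scores[a0] = total / n_rollouts
    return max(scores, key=scores.get)

# Toy usage: random-walk "model" on integers, preferring larger states.
toy_model = lambda s, a, rng: s + a + int(rng.integers(-1, 2))
toy_value = lambda s: float(s)
a_star = mc_plan(state=0, actions=[-1, 0, 1], transition_model=toy_model,
                 value_fn=toy_value, n_rollouts=16, horizon=3,
                 rng=np.random.default_rng(0))
```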

Active inference: demystified and compared

1 code implementation24 Sep 2019 Noor Sajid, Philip J. Ball, Thomas Parr, Karl J. Friston

In this paper, we provide: 1) an accessible overview of the discrete-state formulation of active inference, highlighting natural behaviors in active inference that are generally engineered in RL; 2) an explicit discrete-state comparison between active inference and RL on an OpenAI gym baseline.

Atari Games, OpenAI Gym +1
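A minimal sketch of the discrete-state ingredients such an overview covers (a likelihood matrix A, action-conditioned transitions B, and a state-estimation update) is shown below; the toy model and the simplified softmax update are assumptions for illustration, not the paper's code.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a log-probability vector."""
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def infer_state(prior, A, obs_idx):
    """Posterior over hidden states after observing outcome obs_idx (Bayes rule in log space)."""
    log_post = np.log(prior + 1e-12) + np.log(A[obs_idx, :] + 1e-12)
    return softmax(log_post)

# Toy model: 2 hidden states, 2 outcomes, 2 actions ("stay" and "switch").
A = np.array([[0.9, 0.2],      # P(o=0 | s)
              [0.1, 0.8]])     # P(o=1 | s)
B = [np.eye(2), np.array([[0.0, 1.0],
                          [1.0, 0.0]])]
prior = np.array([0.5, 0.5])

qs = infer_state(prior, A, obs_idx=1)   # update beliefs after observing outcome 1
qs_pred = B[1] @ qs                     # predicted states after taking "switch"
```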
