Search Results for author: Peter Dayan

Found 26 papers, 2 papers with code

Simplicity in Complexity

no code implementations5 Mar 2024 Kevin Shen, Surabhi S Nath, Aenne Brielmann, Peter Dayan

We find that complexity is well explained by a simple linear model with these two features across six diverse image sets of naturalistic scenes and art images.
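
A minimal sketch of what fitting such a two-feature linear model could look like. The two predictors below are hypothetical stand-ins on synthetic data, not the paper's actual features or ratings:

```python
import numpy as np

# Hedged sketch: feat_a and feat_b are hypothetical stand-ins for the
# paper's two features; the "complexity" ratings are synthetic.
rng = np.random.default_rng(0)
feat_a = rng.random(100)
feat_b = rng.random(100)
complexity = 2.0 * feat_a + 1.0 * feat_b + rng.normal(0, 0.05, 100)

X = np.column_stack([feat_a, feat_b, np.ones(100)])  # two features + intercept
w, *_ = np.linalg.lstsq(X, complexity, rcond=None)   # ordinary least squares
print(np.round(w, 2))  # roughly [2.0, 1.0, 0.0]
```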

Predicting the Future with Simple World Models

no code implementations31 Jan 2024 Tankred Saanum, Peter Dayan, Eric Schulz

Abstracting the dynamics of the environment with simple models can have several benefits.

Video Prediction

The Inner Sentiments of a Thought

no code implementations4 Jul 2023 Chris Gagne, Peter Dayan

Transformer-based large language models (LLMs) are able to generate highly realistic text.

Habits of Mind: Reusing Action Sequences for Efficient Planning

no code implementations8 Jun 2023 Noémi Éltető, Peter Dayan

When we exercise sequences of actions, their execution becomes more fluent and precise.

Chunking

Catastrophe, Compounding & Consistency in Choice

no code implementations12 Nov 2021 Chris Gagne, Peter Dayan

Conditional value-at-risk (CVaR) precisely characterizes the influence that rare, catastrophic events can exert over decisions.

Decision Making
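
As a rough illustration of the quantity involved (not the paper's method): CVaR at level alpha is the expected outcome within the worst alpha-fraction of the distribution, and an empirical version takes only a few lines:

```python
import numpy as np

def cvar(samples, alpha):
    """Empirical conditional value-at-risk: the mean of the worst
    alpha-fraction of sampled outcomes (the lower tail)."""
    samples = np.sort(np.asarray(samples, dtype=float))
    k = max(1, int(np.ceil(alpha * len(samples))))  # size of the worst tail
    return samples[:k].mean()

returns = [1.0, 2.0, 3.0, 4.0, -10.0]
print(cvar(returns, alpha=0.2))  # worst 20% of outcomes -> -10.0
print(np.mean(returns))          # the expectation hides the catastrophe -> 0.0
```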

Two steps to risk sensitivity

1 code implementation NeurIPS 2021 Chris Gagne, Peter Dayan

Distributional reinforcement learning (RL), in which agents learn about all the possible long-term consequences of their actions and not just the expected value, has attracted great recent interest.

Decision Making Distributional Reinforcement Learning +2

A Local Temporal Difference Code for Distributional Reinforcement Learning

no code implementations NeurIPS 2020 Pablo Tano, Peter Dayan, Alexandre Pouget

Recent theoretical and experimental results suggest that the dopamine system implements distributional temporal difference backups, allowing learning of the entire distributions of the long-run values of states rather than just their expected values.

Distributional Reinforcement Learning Imputation +2
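
A toy sketch of the flavour of distributional TD learning (illustrative only, not the paper's neural code): each atom tracks one quantile of the sampled returns via a stochastic quantile-regression update. The uniform reward and gamma = 0 are simplifying assumptions:

```python
import random

def quantile_td(taus, n_steps=50000, lr=0.02, seed=0):
    """Toy stochastic quantile estimation: atom theta_i tracks the
    tau_i-quantile of sampled one-step returns (gamma = 0 here, so the
    return is just the reward)."""
    rng = random.Random(seed)
    thetas = [0.0] * len(taus)
    for _ in range(n_steps):
        r = rng.random()  # assumed toy reward ~ Uniform(0, 1)
        for i, tau in enumerate(taus):
            # quantile-regression TD update: step up with weight tau,
            # down with weight (1 - tau)
            thetas[i] += lr * (tau - (1.0 if r < thetas[i] else 0.0))
    return thetas

print(quantile_td([0.1, 0.5, 0.9]))  # approaches [0.1, 0.5, 0.9]
```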

Static and Dynamic Values of Computation in MCTS

no code implementations11 Feb 2020 Eren Sezener, Peter Dayan

Monte-Carlo Tree Search (MCTS) is one of the most widely used methods for planning, and has powered many recent advances in artificial intelligence.
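
For context, the UCB1 rule that drives node selection in standard MCTS (UCT) can be sketched as follows; the three-armed toy problem and its success probabilities stand in for the children of a search node and are not from the paper:

```python
import math, random

def ucb1_select(counts, values, c=1.4):
    """UCB1 rule used for child selection in standard MCTS (UCT):
    empirical value plus an exploration bonus for rarely tried children."""
    total = sum(counts)
    if 0 in counts:
        return counts.index(0)  # try every child at least once
    scores = [values[a] / counts[a] + c * math.sqrt(math.log(total) / counts[a])
              for a in range(len(counts))]
    return scores.index(max(scores))

# Toy 3-armed problem standing in for the children of a search node.
random.seed(0)
probs = [0.2, 0.5, 0.8]            # made-up success probabilities
counts, values = [0, 0, 0], [0.0, 0.0, 0.0]
for _ in range(2000):
    a = ucb1_select(counts, values)
    counts[a] += 1
    values[a] += 1.0 if random.random() < probs[a] else 0.0
print(counts)  # most pulls concentrate on the best arm (index 2)
```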

Disentangled behavioural representations

1 code implementation NeurIPS 2019 Amir Dezfouli, Hassan Ashtiani, Omar Ghattas, Richard Nock, Peter Dayan, Cheng Soon Ong

Individual characteristics in human decision-making are often quantified by fitting a parametric cognitive model to subjects' behavior and then studying differences between them in the associated parameter space.

Decision Making

Integrated accounts of behavioral and neuroimaging data using flexible recurrent neural network models

no code implementations NeurIPS 2018 Amir Dezfouli, Richard Morris, Fabio T. Ramos, Peter Dayan, Bernard Balleine

One standard approach to this is model-based fMRI data analysis, in which a model is fitted to the behavioral data, i.e., a subject's choices, and then the neural data are parsed to find brain regions whose BOLD signals are related to the model's internal signals.

Decision Making

Probabilistic Meta-Representations Of Neural Networks

no code implementations1 Oct 2018 Theofanis Karaletsos, Peter Dayan, Zoubin Ghahramani

Existing Bayesian treatments of neural networks are typically characterized by weak prior and approximate posterior distributions according to which all the weights are drawn independently.

Comparison of Maximum Likelihood and GAN-based training of Real NVPs

no code implementations15 May 2017 Ivo Danihelka, Balaji Lakshminarayanan, Benigno Uria, Daan Wierstra, Peter Dayan

We train a generator by maximum likelihood, and we train the same generator architecture as a Wasserstein GAN.

One-Shot Learning
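
A toy contrast of the two objectives on a 1-D affine flow (an illustrative stand-in for a Real NVP, not the paper's setup; the WGAN critic training loop is omitted):

```python
import numpy as np

# Toy 1-D "flow": x = mu + exp(log_sigma) * z, z ~ N(0, 1).

def nll(params, x):
    """Maximum-likelihood objective: negative log-likelihood of the data
    via the change-of-variables formula."""
    mu, log_sigma = params
    z = (x - mu) * np.exp(-log_sigma)                       # inverse flow
    log_p = -0.5 * (z**2 + np.log(2 * np.pi)) - log_sigma   # log N(z) + log|dz/dx|
    return -log_p.mean()

def wgan_generator_loss(critic, params, z):
    """Wasserstein-GAN objective for the same generator: maximise the
    critic's score on generated samples."""
    mu, log_sigma = params
    x_fake = mu + np.exp(log_sigma) * z                     # forward flow
    return -critic(x_fake).mean()

rng = np.random.default_rng(0)
data = rng.normal(2.0, 0.5, size=1000)
print(nll((2.0, np.log(0.5)), data))  # near the entropy of N(2, 0.25), about 0.73
```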

Bayes-Adaptive Simulation-based Search with Value Function Approximation

no code implementations NeurIPS 2014 Arthur Guez, Nicolas Heess, David Silver, Peter Dayan

Bayes-adaptive planning offers a principled solution to the exploration-exploitation trade-off under model uncertainty.

Better Optimism By Bayes: Adaptive Planning with Rich Models

no code implementations9 Feb 2014 Arthur Guez, David Silver, Peter Dayan

The computational costs of inference and planning have confined Bayesian model-based reinforcement learning to one of two dismal fates: powerful Bayes-adaptive planning but only for simplistic models, or powerful, Bayesian non-parametric models but using simple, myopic planning strategies such as Thompson sampling.

Model-based Reinforcement Learning Reinforcement Learning (RL) +1
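
For reference, Thompson sampling, the myopic strategy mentioned above, takes only a few lines for a Bernoulli bandit; this toy sketch is illustrative, not the paper's algorithm:

```python
import random

def thompson_bandit(probs, n_steps=5000, seed=1):
    """Thompson sampling for a Bernoulli bandit: keep a Beta posterior
    per arm, draw one sample from each posterior, pull the argmax arm."""
    rng = random.Random(seed)
    k = len(probs)
    alpha, beta = [1] * k, [1] * k  # Beta(1, 1) priors
    pulls = [0] * k
    for _ in range(n_steps):
        draws = [rng.betavariate(alpha[a], beta[a]) for a in range(k)]
        a = max(range(k), key=lambda i: draws[i])
        reward = 1 if rng.random() < probs[a] else 0
        alpha[a] += reward          # posterior update on success
        beta[a] += 1 - reward       # posterior update on failure
        pulls[a] += 1
    return pulls

print(thompson_bandit([0.3, 0.5, 0.7]))  # the 0.7 arm gets most pulls
```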

Correlations strike back (again): the case of associative memory retrieval

no code implementations NeurIPS 2013 Cristina Savin, Peter Dayan, Mate Lengyel

It has long been recognised that statistical dependencies in neuronal activity need to be taken into account when decoding stimuli encoded in a neural population.

Retrieval

Efficient Bayes-Adaptive Reinforcement Learning using Sample-Based Search

no code implementations NeurIPS 2012 Arthur Guez, David Silver, Peter Dayan

Bayesian model-based reinforcement learning is a formally elegant approach to learning optimal behaviour under model uncertainty, trading off exploration and exploitation in an ideal way.

Model-based Reinforcement Learning Reinforcement Learning (RL) +1

Two is better than one: distinct roles for familiarity and recollection in retrieving palimpsest memories

no code implementations NeurIPS 2011 Cristina Savin, Peter Dayan, Máté Lengyel

Storing a new pattern in a palimpsest memory system comes at the cost of interfering with the memory traces of previously stored items.

Statistical Models of Linear and Nonlinear Contextual Interactions in Early Visual Processing

no code implementations NeurIPS 2009 Ruben Coen-Cagli, Peter Dayan, Odelia Schwartz

A central hypothesis about early visual processing is that it represents inputs in a coordinate system matched to the statistics of natural scenes.

Know Thy Neighbour: A Normative Theory of Synaptic Depression

no code implementations NeurIPS 2009 Jean-Pascal Pfister, Peter Dayan, Máté Lengyel

Synapses exhibit an extraordinary degree of short-term malleability, with release probabilities and effective synaptic strengths changing markedly over multiple timescales.

Load and Attentional Bayes

no code implementations NeurIPS 2008 Peter Dayan

Selective attention is a most intensively studied psychological phenomenon, rife with theoretical suggestions and schisms.

Bayesian Model of Behaviour in Economic Games

no code implementations NeurIPS 2008 Debajyoti Ray, Brooks King-Casas, P. R. Montague, Peter Dayan

Classical game-theoretic approaches that make strong rationality assumptions have difficulty modeling the observed behaviour of human subjects in economic games.
