no code implementations • 5 Mar 2024 • Kevin Shen, Surabhi S Nath, Aenne Brielmann, Peter Dayan
We find that complexity is well-explained by a simple linear model with these two features across six diverse image-sets of naturalistic scene and art images.
no code implementations • 31 Jan 2024 • Tankred Saanum, Peter Dayan, Eric Schulz
Abstracting the dynamics of the environment with simple models can have several benefits.
no code implementations • 4 Jul 2023 • Chris Gagne, Peter Dayan
Transformer-based large language models (LLMs) are able to generate highly realistic text.
no code implementations • 8 Jun 2023 • Noémi Éltető, Peter Dayan
When we exercise sequences of actions, their execution becomes more fluent and precise.
no code implementations • 12 Nov 2021 • Chris Gagne, Peter Dayan
Conditional value-at-risk (CVaR) precisely characterizes the influence that rare, catastrophic events can exert over decisions.
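The CVaR objective mentioned here can be illustrated with a tiny empirical estimator — a sketch of the standard textbook definition, not this paper's algorithm; the sample losses are invented:

```python
def cvar(samples, alpha):
    """Empirical conditional value-at-risk: the mean loss over the worst
    alpha-fraction of outcomes (losses as positive numbers, larger = worse)."""
    worst_first = sorted(samples, reverse=True)   # worst outcomes first
    k = max(1, round(alpha * len(worst_first)))   # size of the tail
    return sum(worst_first[:k]) / k

losses = [1, 2, 3, 4, 100]        # one rare, catastrophic loss
print(cvar(losses, 0.2))          # mean of worst 20% -> 100.0
print(sum(losses) / len(losses))  # plain expectation -> 22.0
```

Unlike the expectation, the CVaR estimate is dominated by the rare catastrophic outcome, which is exactly the sensitivity the abstract refers to.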
1 code implementation • NeurIPS 2021 • Chris Gagne, Peter Dayan
Distributional reinforcement learning (RL) -- in which agents learn about all the possible long-term consequences of their actions, and not just the expected value -- is of great recent interest.
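One common way distributional RL represents "all the possible long-term consequences" is with a set of quantile estimates updated by quantile regression. The following is our illustrative simplification, not the paper's method; the exponential reward distribution is invented:

```python
import random

def quantile_update(quantiles, target, lr=0.05):
    """One quantile-regression step: nudge estimate i toward `target` with
    asymmetric step sizes so that it tracks the tau_i-quantile of targets."""
    n = len(quantiles)
    updated = []
    for i, q in enumerate(quantiles):
        tau = (i + 0.5) / n                       # quantile level for this estimate
        step = tau if target > q else tau - 1.0   # asymmetric "pinball" gradient
        updated.append(q + lr * step)
    return updated

random.seed(0)
qs = [0.0] * 5
for _ in range(20000):
    sampled_return = random.expovariate(1.0)      # toy return distribution (invented)
    qs = quantile_update(qs, sampled_return)
print(qs)   # approaches the 0.1, 0.3, 0.5, 0.7, 0.9 quantiles of Exp(1)
```

The five numbers together summarize the whole return distribution, rather than collapsing it to a single expected value.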
no code implementations • NeurIPS 2020 • Pablo Tano, Peter Dayan, Alexandre Pouget
Recent theoretical and experimental results suggest that the dopamine system implements distributional temporal difference backups, allowing learning of the entire distributions of the long-run values of states rather than just their expected values.
no code implementations • ICLR 2021 • Sanjeevan Ahilan, Peter Dayan
We consider the problem of learning to communicate using multi-agent reinforcement learning (MARL).
no code implementations • 11 Feb 2020 • Eren Sezener, Peter Dayan
Monte-Carlo Tree Search (MCTS) is one of the most-widely used methods for planning, and has powered many recent advances in artificial intelligence.
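MCTS's tree policy is commonly driven by the UCT rule, which trades off a child's estimated value against an exploration bonus. A minimal sketch of that standard rule (not this paper's specific contribution; the child statistics are invented):

```python
import math

def uct_select(children, c=1.4):
    """Pick the child maximizing mean value plus an exploration bonus
    that shrinks as a child accumulates visits (the UCB1 formula)."""
    total = sum(ch["visits"] for ch in children)
    def score(ch):
        if ch["visits"] == 0:
            return float("inf")           # always try unvisited children first
        mean = ch["value"] / ch["visits"]
        return mean + c * math.sqrt(math.log(total) / ch["visits"])
    return max(children, key=score)

children = [
    {"name": "a", "visits": 10, "value": 7.0},   # well-explored, decent
    {"name": "b", "visits": 2,  "value": 1.0},   # barely explored
    {"name": "c", "visits": 0,  "value": 0.0},   # never tried
]
print(uct_select(children)["name"])   # -> c (unvisited nodes win)
```

With the unvisited child removed, the bonus term makes the barely-explored child "b" beat the higher-mean child "a", which is the exploration pressure that drives the search.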
1 code implementation • NeurIPS 2019 • Amir Dezfouli, Hassan Ashtiani, Omar Ghattas, Richard Nock, Peter Dayan, Cheng Soon Ong
Individual characteristics in human decision-making are often quantified by fitting a parametric cognitive model to subjects' behavior and then studying differences between them in the associated parameter space.
no code implementations • 24 Jan 2019 • Sanjeevan Ahilan, Peter Dayan
We investigate how reinforcement learning agents can learn to cooperate.
no code implementations • NeurIPS 2018 • Amir Dezfouli, Richard Morris, Fabio T. Ramos, Peter Dayan, Bernard Balleine
One standard approach to this is model-based fMRI data analysis, in which a model is fitted to the behavioral data, i.e., a subject's choices, and then the neural data are parsed to find brain regions whose BOLD signals are related to the model's internal signals.
no code implementations • 1 Oct 2018 • Theofanis Karaletsos, Peter Dayan, Zoubin Ghahramani
Existing Bayesian treatments of neural networks are typically characterized by weak prior and approximate posterior distributions according to which all the weights are drawn independently.
no code implementations • ICML 2018 • Jack W. Rae, Chris Dyer, Peter Dayan, Timothy P. Lillicrap
Neural networks trained with backpropagation often struggle to identify classes that have been observed a small number of times.
Ranked #68 on Language Modelling on WikiText-103
no code implementations • 15 May 2017 • Ivo Danihelka, Balaji Lakshminarayanan, Benigno Uria, Daan Wierstra, Peter Dayan
We train a generator by maximum likelihood and we also train the same generator architecture by Wasserstein GAN.
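The two training regimes being compared can be written as schematic per-sample losses. This is our simplification for illustration; the function names are ours, and real training would apply these to neural-network outputs:

```python
import math

def mle_loss(log_prob_of_data):
    """Maximum likelihood trains the generator to assign high log-probability
    to real samples: minimize the negative log-likelihood."""
    return -log_prob_of_data

def wgan_losses(critic_real, critic_fake):
    """Wasserstein GAN trains a Lipschitz-constrained critic to separate real
    from generated samples; the generator then raises the critic's score on fakes."""
    critic_loss = -(critic_real - critic_fake)   # critic maximizes the gap
    gen_loss = -critic_fake                      # generator minimizes -critic(fake)
    return critic_loss, gen_loss

print(mle_loss(math.log(0.9)))    # small loss when the model likes real data
print(wgan_losses(2.0, -1.0))     # -> (-3.0, 1.0)
```

The key contrast is that the MLE objective never looks at generated samples, whereas the WGAN objective is defined entirely through the critic's comparison of real and generated data.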
no code implementations • 14 Sep 2016 • Neil R. Bramley, Peter Dayan, Thomas L. Griffiths, David A. Lagnado
Higher-level cognition depends on the ability to learn models of the world.
no code implementations • 12 Feb 2015 • Andreas Hula, P. Read Montague, Peter Dayan
Reciprocating interactions represent a central feature of all human exchanges.
no code implementations • NeurIPS 2014 • Arthur Guez, Nicolas Heess, David Silver, Peter Dayan
Bayes-adaptive planning offers a principled solution to the exploration-exploitation trade-off under model uncertainty.
no code implementations • 9 Feb 2014 • Arthur Guez, David Silver, Peter Dayan
The computational costs of inference and planning have confined Bayesian model-based reinforcement learning to one of two dismal fates: powerful Bayes-adaptive planning but only for simplistic models, or powerful, Bayesian non-parametric models but using simple, myopic planning strategies such as Thompson sampling.
Model-based Reinforcement Learning • Reinforcement Learning (RL) +1
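Thompson sampling, the "simple, myopic planning strategy" named in this entry, can be sketched for a Bernoulli bandit with Beta posteriors. This is illustrative only; the arm probabilities and helper names are invented:

```python
import random

def thompson_step(counts, pull):
    """One round of Thompson sampling: draw a success probability from each
    arm's Beta posterior and greedily play the arm with the largest draw."""
    sampled = [random.betavariate(1 + s, 1 + f) for s, f in counts]
    arm = max(range(len(sampled)), key=sampled.__getitem__)
    reward = pull(arm)
    s, f = counts[arm]
    counts[arm] = (s + reward, f + 1 - reward)   # update the posterior counts
    return arm

random.seed(1)
true_p = [0.2, 0.8]                              # invented arm probabilities
counts = [(0, 0), (0, 0)]                        # (successes, failures) per arm
pulls = [thompson_step(counts, lambda a: int(random.random() < true_p[a]))
         for _ in range(2000)]
print(pulls.count(1) / len(pulls))               # concentrates on the better arm
```

Each decision is myopic — sample once from the posterior and act greedily — which is exactly the contrast the abstract draws with full Bayes-adaptive planning.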
no code implementations • NeurIPS 2013 • Cristina Savin, Peter Dayan, Mate Lengyel
It has long been recognised that statistical dependencies in neuronal activity need to be taken into account when decoding stimuli encoded in a neural population.
no code implementations • NeurIPS 2012 • Arthur Guez, David Silver, Peter Dayan
Bayesian model-based reinforcement learning is a formally elegant approach to learning optimal behaviour under model uncertainty, trading off exploration and exploitation in an ideal way.
Model-based Reinforcement Learning • Reinforcement Learning +1
no code implementations • NeurIPS 2011 • Cristina Savin, Peter Dayan, Máté Lengyel
Storing a new pattern in a palimpsest memory system comes at the cost of interfering with the memory traces of previously stored items.
no code implementations • NeurIPS 2009 • Ruben Coen-Cagli, Peter Dayan, Odelia Schwartz
A central hypothesis about early visual processing is that it represents inputs in a coordinate system matched to the statistics of natural scenes.
no code implementations • NeurIPS 2009 • Jean-Pascal Pfister, Peter Dayan, Máté Lengyel
Synapses exhibit an extraordinary degree of short-term malleability, with release probabilities and effective synaptic strengths changing markedly over multiple timescales.
no code implementations • NeurIPS 2008 • Peter Dayan
Selective attention is a most intensively studied psychological phenomenon, rife with theoretical suggestions and schisms.
no code implementations • NeurIPS 2008 • Debajyoti Ray, Brooks King-Casas, P. R. Montague, Peter Dayan
Classical game-theoretic approaches that make strong rationality assumptions have difficulty modeling the observed behaviour of human subjects in economic games.