Search Results for author: Neil Burgess

Found 9 papers, 5 papers with code

Successor-Predecessor Intrinsic Exploration

no code implementations • NeurIPS 2023 • Changmin Yu, Neil Burgess, Maneesh Sahani, Samuel J. Gershman

Here we focus on exploration with intrinsic rewards, where the agent transiently augments the external rewards with self-generated intrinsic rewards.

Atari Games Efficient Exploration +1
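To make the idea of intrinsic-reward augmentation concrete, here is a minimal illustrative sketch (not the paper's successor-predecessor method): a generic count-based novelty bonus added to the external reward, which decays as states become familiar, matching the "transient" augmentation described above. All names here are hypothetical.

```python
import numpy as np

class CountBasedIntrinsicReward:
    """Hypothetical count-based novelty bonus: r_int = beta / sqrt(visit_count).

    Illustrative only; the paper's intrinsic reward is derived differently.
    """

    def __init__(self, beta=0.1):
        self.beta = beta
        self.counts = {}

    def __call__(self, state):
        # Each visit increments the count, so the bonus shrinks over time.
        self.counts[state] = self.counts.get(state, 0) + 1
        return self.beta / np.sqrt(self.counts[state])

intrinsic = CountBasedIntrinsicReward(beta=0.1)

def augmented_reward(extrinsic_reward, state):
    # The agent optimises the sum of external and intrinsic rewards.
    return extrinsic_reward + intrinsic(state)

# The first visit to a state earns a larger bonus than repeat visits.
r1 = augmented_reward(0.0, state=0)  # bonus = 0.1 / sqrt(1)
r2 = augmented_reward(0.0, state=0)  # bonus = 0.1 / sqrt(2)
```

The transient character comes from the 1/sqrt(count) decay: once a state is well explored, the augmented reward reduces to the external reward alone.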

Structured Recognition for Generative Models with Explaining Away

1 code implementation • 12 Sep 2022 • Changmin Yu, Hugo Soulat, Neil Burgess, Maneesh Sahani

A key goal of unsupervised learning is to go beyond density estimation and sample generation to reveal the structure inherent within observed data.

Density Estimation Hippocampus +2

SEREN: Knowing When to Explore and When to Exploit

no code implementations • 30 May 2022 • Changmin Yu, David Mguni, Dong Li, Aivar Sootla, Jun Wang, Neil Burgess

Efficient reinforcement learning (RL) involves a trade-off between "exploitative" actions that maximise expected reward and "explorative" ones that sample unvisited states.

Reinforcement Learning (RL)
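The exploit/explore trade-off mentioned above can be illustrated with the textbook epsilon-greedy rule (a generic baseline, not the SEREN mechanism):

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon take an explorative (random) action,
    otherwise the exploitative (greedy) one. Illustrative sketch only."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# With epsilon = 0 the choice is purely exploitative (greedy).
greedy_action = epsilon_greedy([0.1, 0.5, 0.2], epsilon=0.0)
```

SEREN's contribution is deciding *when* to switch between the two modes, rather than mixing them with a fixed epsilon as this baseline does.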

Learning State Representations via Retracing in Reinforcement Learning

1 code implementation • ICLR 2022 • Changmin Yu, Dong Li, Jianye Hao, Jun Wang, Neil Burgess

We propose learning via retracing, a novel self-supervised approach for learning the state representation (and the associated dynamics model) for reinforcement learning tasks.

Continuous Control Model-based Reinforcement Learning +3

Prediction and Generalisation over Directed Actions by Grid Cells

1 code implementation • ICLR 2021 • Changmin Yu, Timothy E. J. Behrens, Neil Burgess

Knowing how the effects of directed actions generalise to new situations (e.g. moving North, South, East and West, or turning left, right, etc.)

Continuous Control Translation

Coordinated hippocampal-entorhinal replay as structural inference

1 code implementation • NeurIPS 2019 • Talfan Evans, Neil Burgess

Constructing and maintaining useful representations of sensory experience is essential for reasoning about one's environment.

Probabilistic Successor Representations with Kalman Temporal Differences

no code implementations • 6 Oct 2019 • Jesse P. Geerts, Kimberly L. Stachenfeld, Neil Burgess

The effectiveness of Reinforcement Learning (RL) depends on an animal's ability to assign credit for rewards to the appropriate preceding stimuli.

Reinforcement Learning (RL)
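The successor representation underlying this paper can be sketched in its simplest tabular form. The following uses a plain TD(0) update rather than the paper's Kalman temporal-difference formulation, so it illustrates credit assignment to preceding states but not the probabilistic (uncertainty-tracking) part; all parameter values are arbitrary.

```python
import numpy as np

# Tabular successor representation M, learned with TD(0).
# M[s, s'] estimates the expected discounted future occupancy of s'
# starting from s, so value factorises as V(s) = sum_s' M[s, s'] * R(s').
n_states, gamma, alpha = 3, 0.9, 0.1
M = np.eye(n_states)  # initialise with identity (each state occupies itself)

def sr_td_update(s, s_next):
    """One TD(0) update of the successor representation for row s."""
    onehot = np.eye(n_states)[s]
    td_error = onehot + gamma * M[s_next] - M[s]
    M[s] += alpha * td_error

# Repeatedly observing the transition 0 -> 1 raises M[0, 1]:
# state 1 is credited as a likely successor of state 0.
for _ in range(50):
    sr_td_update(0, 1)
```

After these updates M[0, 1] approaches gamma = 0.9, its fixed point for a deterministic 0 -> 1 transition; the Kalman TD approach in the paper would additionally maintain a posterior covariance over M rather than a point estimate.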
