Search Results for author: Simon Schmitt

Found 11 papers, 5 papers with code

Exploration via Epistemic Value Estimation

no code implementations • 7 Mar 2023 • Simon Schmitt, John Shawe-Taylor, Hado van Hasselt

We propose epistemic value estimation (EVE): a recipe that is compatible with sequential decision making and with neural network function approximators.

Decision Making • Efficient Exploration • +1

Chaining Value Functions for Off-Policy Learning

no code implementations • 17 Jan 2022 • Simon Schmitt, John Shawe-Taylor, Hado van Hasselt

To accumulate knowledge and improve its policy of behaviour, a reinforcement learning agent can learn `off-policy' about policies that differ from the policy used to generate its experience.

reinforcement-learning • Reinforcement Learning (RL)
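The off-policy setting described in the abstract can be illustrated with a one-step temporal-difference update weighted by an importance-sampling ratio between the target policy and the behaviour policy. This is a minimal generic sketch of the idea, not the value-function chaining method the paper proposes; all names and values below are illustrative.

```python
import numpy as np

def td0_off_policy_update(v, s, a, r, s_next, pi, mu, alpha=0.1, gamma=0.99):
    """One-step off-policy TD(0): update the state-value estimate v[s]
    from a single transition generated by behaviour policy mu, corrected
    towards target policy pi via the ratio pi(a|s) / mu(a|s)."""
    rho = pi[s, a] / mu[s, a]              # importance-sampling ratio
    td_error = r + gamma * v[s_next] - v[s]
    v[s] += alpha * rho * td_error
    return v

# Tiny usage example: two states, two actions (numbers are made up).
v = np.zeros(2)
pi = np.array([[0.9, 0.1], [0.5, 0.5]])   # target policy
mu = np.array([[0.5, 0.5], [0.5, 0.5]])   # behaviour policy
v = td0_off_policy_update(v, s=0, a=0, r=1.0, s_next=1, pi=pi, mu=mu)
```

With the numbers above the ratio is 1.8 and the TD error is 1.0, so the single update moves v[0] by 0.1 × 1.8 × 1.0.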

A Network Control Theory Approach to Longitudinal Symptom Dynamics in Major Depressive Disorder

no code implementations • 21 Jul 2021 • Tim Hahn, Hamidreza Jamalabadi, Daniel Emden, Janik Goltermann, Jan Ernsting, Nils R. Winter, Lukas Fisch, Ramona Leenings, Kelvin Sarink, Vincent Holstein, Marius Gruber, Dominik Grotegerd, Susanne Meinert, Katharina Dohm, Elisabeth J. Leehr, Maike Richter, Lisa Sindermann, Verena Enneking, Hannah Lemke, Stephanie Witt, Marcella Rietschel, Katharina Brosch, Julia-Katharina Pfarr, Tina Meller, Kai Gustav Ringwald, Simon Schmitt, Frederike Stein, Igor Nenadic, Tilo Kircher, Bertram Müller-Myhsok, Till F. M. Andlauer, Jonathan Repple, Udo Dannlowski, Nils Opel

We quantified the theoretical energy required for each patient and time-point to reach a symptom-free state given individual symptom-network topology ($E_0$), and (1) tested whether $E_0$ predicts future symptom improvement and (2) whether this relationship is moderated by Polygenic Risk Scores (PRS) of mental disorders, childhood maltreatment experience, and self-reported resilience.


AlgebraNets

1 code implementation • 12 Jun 2020 • Jordan Hoffmann, Simon Schmitt, Simon Osindero, Karen Simonyan, Erich Elsen

Neural networks have historically been built layerwise from the set of functions in $\{f: \mathbb{R}^n \to \mathbb{R}^m\}$, i.e. with activations and weights/parameters represented by real numbers, $\mathbb{R}$.

Image Classification • Language Modelling
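As a toy illustration of swapping out the number system underlying a layer, here is a dense linear map with complex-valued rather than real-valued weights. This is purely illustrative of the kind of substitution the abstract alludes to; the paper's actual choice of algebras is not reproduced here.

```python
import numpy as np

# A dense layer is just a linear map; nothing in the matmul requires
# the entries to be real. Here the same operation runs over C instead
# of R (shapes and values are arbitrary).
rng = np.random.default_rng(0)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)        # complex input
W = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
y = W @ x                    # complex-valued linear map C^4 -> C^3
```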

Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model

16 code implementations • 19 Nov 2019 • Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, Timothy Lillicrap, David Silver

When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules.

Atari Games • Atari Games 100k • +3

Off-Policy Actor-Critic with Shared Experience Replay

no code implementations • ICML 2020 • Simon Schmitt, Matteo Hessel, Karen Simonyan

We investigate the combination of actor-critic reinforcement learning algorithms with uniform large-scale experience replay, and propose solutions for two challenges: (a) efficient actor-critic learning with experience replay, and (b) stability of off-policy learning where agents learn from other agents' behaviour.

Atari Games
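A standard ingredient for stabilising off-policy actor-critic learning from replayed or other-agent experience is a truncated importance-weighted value target such as V-trace. The sketch below is a generic V-trace target computation under assumed shapes, not the paper's specific solution.

```python
import numpy as np

def vtrace_targets(values, rewards, rhos, gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """Compute V-trace value targets for a trajectory of length T.
    values: V(s_0..s_T) (length T+1); rewards: r_0..r_{T-1};
    rhos: per-step pi/mu importance ratios, clipped at rho_bar (for the
    TD errors) and c_bar (for the trace coefficients)."""
    T = len(rewards)
    clipped_rho = np.minimum(rhos, rho_bar)
    clipped_c = np.minimum(rhos, c_bar)
    out = np.zeros(T + 1)
    out[T] = values[T]
    acc = 0.0
    # Accumulate corrections backwards along the trajectory.
    for t in reversed(range(T)):
        delta = clipped_rho[t] * (rewards[t] + gamma * values[t + 1] - values[t])
        acc = delta + gamma * clipped_c[t] * acc
        out[t] = values[t] + acc
    return out

# Usage: with ratios clipped to 1 and gamma=1 this reduces to an
# n-step return on top of the bootstrap values.
targets = vtrace_targets(values=[0.0, 0.0, 0.0], rewards=[1.0, 1.0],
                         rhos=np.array([2.0, 2.0]), gamma=1.0)
```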

Multi-task Deep Reinforcement Learning with PopArt

2 code implementations • 12 Sep 2018 • Matteo Hessel, Hubert Soyer, Lasse Espeholt, Wojciech Czarnecki, Simon Schmitt, Hado van Hasselt

This means the learning algorithm is general, but each solution is not; each agent can only solve the one task it was trained on.

Atari Games • Multi-Task Learning • +2
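The core mechanism of PopArt is adaptive target normalisation that preserves outputs: running statistics of the value targets are tracked, and the network's final linear layer is rescaled whenever the statistics change, so unnormalised predictions are unaffected. A minimal single-task, scalar sketch (the paper's multi-task, vector-valued version is simplified away here):

```python
import math

class PopArt:
    """Illustrative PopArt-style normaliser: track running first and
    second moments of value targets, and rescale the final linear
    layer (w, b) so that the unnormalised output std*(w*h + b) + mean
    is preserved when the statistics are updated."""

    def __init__(self, beta=3e-4):
        self.mean, self.second = 0.0, 1.0   # running moments of targets
        self.w, self.b = 1.0, 0.0           # final linear layer params
        self.beta = beta

    @property
    def std(self):
        return max(math.sqrt(self.second - self.mean ** 2), 1e-6)

    def update(self, target):
        old_mean, old_std = self.mean, self.std
        self.mean = (1 - self.beta) * self.mean + self.beta * target
        self.second = (1 - self.beta) * self.second + self.beta * target ** 2
        # Rescale the last layer so unnormalised outputs are preserved.
        self.w *= old_std / self.std
        self.b = (old_std * self.b + old_mean - self.mean) / self.std

    def normalise(self, target):
        return (target - self.mean) / self.std

# Usage: the unnormalised prediction for a fixed feature h is invariant
# to a statistics update.
pa = PopArt()
h_feat = 2.0
before = pa.std * (pa.w * h_feat + pa.b) + pa.mean
pa.update(10.0)
after = pa.std * (pa.w * h_feat + pa.b) + pa.mean
```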

Kickstarting Deep Reinforcement Learning

no code implementations • 10 Mar 2018 • Simon Schmitt, Jonathan J. Hudson, Augustin Zidek, Simon Osindero, Carl Doersch, Wojciech M. Czarnecki, Joel Z. Leibo, Heinrich Kuttler, Andrew Zisserman, Karen Simonyan, S. M. Ali Eslami

Our method places no constraints on the architecture of the teacher or student agents, and it regulates itself to allow the students to surpass their teachers in performance.

reinforcement-learning • Reinforcement Learning (RL)
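The teacher-student idea described above can be sketched as an auxiliary distillation term added to the usual RL loss, with a weight that is annealed towards zero so the student is eventually free to surpass the teacher. The function below is an assumed simplification, not the paper's exact objective.

```python
import numpy as np

def kickstarting_loss(student_logits, teacher_logits, rl_loss, weight):
    """Kickstarting-style objective sketch: the agent's own RL loss
    plus a KL(teacher || student) distillation term over action
    distributions, scaled by an (annealable) coefficient `weight`."""
    def softmax(x):
        z = np.exp(x - x.max())
        return z / z.sum()
    p_teacher = softmax(teacher_logits)
    p_student = softmax(student_logits)
    kl = float(np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student))))
    return rl_loss + weight * kl

# Usage: when student and teacher agree exactly, the KL term vanishes
# and only the RL loss remains.
loss = kickstarting_loss(np.array([1.0, 2.0]), np.array([1.0, 2.0]),
                         rl_loss=0.5, weight=1.0)
```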
