Search Results for author: Heinrich Küttler

Found 12 papers, 10 papers with code

moolib: A Platform for Distributed RL

1 code implementation • 26 Jan 2022 • Vegard Mella, Eric Hambro, Danielle Rothermel, Heinrich Küttler

Together with the moolib library, we present example user code which shows how moolib’s components can be used to implement common reinforcement learning agents as a simple but scalable distributed network of homogeneous peers.

reinforcement-learning

MiniHack the Planet: A Sandbox for Open-Ended Reinforcement Learning Research

1 code implementation • 27 Sep 2021 • Mikayel Samvelyan, Robert Kirk, Vitaly Kurin, Jack Parker-Holder, Minqi Jiang, Eric Hambro, Fabio Petroni, Heinrich Küttler, Edward Grefenstette, Tim Rocktäschel

By leveraging the full set of entities and environment dynamics from NetHack, one of the richest grid-based video games, MiniHack allows designing custom RL testbeds that are fast and convenient to use.

NetHack reinforcement-learning +1
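MiniHack exposes its tasks through the Gym registry. A minimal sketch of loading one of the prebuilt testbeds, assuming the `minihack` and `gym` packages and the `MiniHack-River-v0` environment ID from the project README:

```python
import gym
import minihack  # noqa: F401  # importing registers the MiniHack-* environment IDs with Gym

# Load a prebuilt MiniHack task (environment ID assumed from the MiniHack README).
env = gym.make("MiniHack-River-v0")

obs = env.reset()
done = False
while not done:
    # Sample random actions; in practice an RL agent's policy would go here.
    obs, reward, done, info = env.step(env.action_space.sample())
env.close()
```

Custom testbeds are built the same way: MiniHack environments are defined on top of NetHack's entities and dynamics and then registered under their own Gym IDs.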

The NetHack Learning Environment

3 code implementations • NeurIPS 2020 • Heinrich Küttler, Nantas Nardelli, Alexander H. Miller, Roberta Raileanu, Marco Selvatici, Edward Grefenstette, Tim Rocktäschel

Here, we present the NetHack Learning Environment (NLE), a scalable, procedurally generated, stochastic, rich, and challenging environment for RL research based on the popular single-player terminal-based roguelike game, NetHack.

NetHack Score Systematic Generalization
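NLE is likewise packaged as a Gym environment. A minimal interaction loop, assuming the `nle` package and the `NetHackScore-v0` task ID from the NLE README (the score-maximisation task tagged above):

```python
import gym
import nle  # noqa: F401  # importing registers the NetHack* environment IDs with Gym

# Score-maximisation task; other task variants are registered under similar IDs.
env = gym.make("NetHackScore-v0")

obs = env.reset()
done = False
total_reward = 0.0
while not done:
    # Random actions stand in for an agent's policy in this sketch.
    obs, reward, done, info = env.step(env.action_space.sample())
    total_reward += reward
print("episode return:", total_reward)
env.close()
```

Because each episode runs on a freshly generated NetHack dungeon, loops like this are also a simple way to probe the systematic-generalization aspect of the benchmark.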

DeepMind Lab

4 code implementations • 12 Dec 2016 • Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, Julian Schrittwieser, Keith Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis, Shane Legg, Stig Petersen

DeepMind Lab is a first-person 3D game platform designed for research and development of general artificial intelligence and machine learning systems.
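DeepMind Lab ships its own Python API rather than a Gym wrapper. A rough sketch of an episode loop, assuming the `deepmind_lab` module, the `seekavoid_arena_01` level from the repository, and the observation and config names in the project docs (these may differ across versions):

```python
import numpy as np
import deepmind_lab

# Request RGB frames at a small resolution; config values are passed as strings.
env = deepmind_lab.Lab(
    "seekavoid_arena_01",
    ["RGB_INTERLEAVED"],
    config={"width": "84", "height": "84"},
)

env.reset()
spec = env.action_spec()  # list of dicts describing each action dimension
noop = np.zeros(len(spec), dtype=np.intc)  # one integer per action dimension

while env.is_running():
    frame = env.observations()["RGB_INTERLEAVED"]
    # Repeat a no-op action for 4 frames; a real agent would pick actions from `frame`.
    reward = env.step(noop, num_steps=4)
```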
