Search Results for author: Mike Preuss

Found 31 papers, 4 papers with code

Illuminating the Diversity-Fitness Trade-Off in Black-Box Optimization

1 code implementation · 29 Aug 2024 · Maria Laura Santoni, Elena Raponi, Aneta Neumann, Frank Neumann, Mike Preuss, Carola Doerr

We emphasize that the main goal of our work is not to present a new algorithm but to look at the problem in a more fundamental and theoretically tractable way by asking the question: What trade-off exists between the minimum distance within batches of solutions and the average quality of their fitness?

Benchmarking · Diversity
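
The question above can be made concrete in a few lines: for a batch of candidate solutions, measure both the minimum pairwise distance (the diversity side) and the average fitness (the quality side). A minimal sketch, assuming NumPy/SciPy, Euclidean distance, and a maximization convention; the toy sphere objective is a stand-in, not the paper's benchmark:

```python
import numpy as np
from scipy.spatial.distance import pdist

def batch_quality(batch, f):
    """Return (min pairwise distance, mean fitness) for a (k, d) batch of solutions."""
    diversity = pdist(batch).min()               # smallest distance between any two solutions
    avg_fitness = np.mean([f(x) for x in batch])
    return diversity, avg_fitness

# Toy example: spread-out batches gain diversity but lose average fitness.
rng = np.random.default_rng(0)
sphere = lambda x: -np.sum(x**2)                 # maximize => optimum at the origin
tight = rng.normal(0.0, 0.1, size=(10, 3))       # clustered near the optimum
spread = rng.uniform(-5.0, 5.0, size=(10, 3))    # scattered over the domain
print(batch_quality(tight, sphere), batch_quality(spread, sphere))
```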

World Models Increase Autonomy in Reinforcement Learning

no code implementations · 19 Aug 2024 · Zhao Yang, Thomas M. Moerland, Mike Preuss, Aske Plaat, Edward S. Hu

However, the training process of RL is far from automatic, requiring extensive human effort to reset the agent and environments.

Reinforcement Learning +1

Memory Gym: Towards Endless Tasks to Benchmark Memory Capabilities of Agents

1 code implementation · 29 Sep 2023 · Marco Pleines, Matthias Pallasch, Frank Zimmer, Mike Preuss

Memory Gym presents a suite of 2D partially observable environments, namely Mortar Mayhem, Mystery Path, and Searing Spotlights, designed to benchmark memory capabilities in decision-making agents.

Decision Making · Deep Reinforcement Learning

Believable Minecraft Settlements by Means of Decentralised Iterative Planning

no code implementations · 19 Sep 2023 · Arthur van der Staaij, Jelmer Prins, Vincent L. Prins, Julian Poelsma, Thera Smit, Matthias Müller-Brockhausen, Mike Preuss

Procedural city generation that focuses on believability and adaptability to random terrain is a difficult challenge in the field of Procedural Content Generation (PCG).

Minecraft

Models Matter: The Impact of Single-Step Retrosynthesis on Synthesis Planning

no code implementations · 10 Aug 2023 · Paula Torren-Peraire, Alan Kai Hassen, Samuel Genheden, Jonas Verhoeven, Djork-Arné Clevert, Mike Preuss, Igor Tetko

Furthermore, we show that the commonly used single-step retrosynthesis benchmark dataset USPTO-50k is insufficient as this evaluation task does not represent model performance and scalability on larger and more diverse datasets.

Retrosynthesis · Single-step retrosynthesis

Two-Memory Reinforcement Learning

no code implementations · 20 Apr 2023 · Zhao Yang, Thomas M. Moerland, Mike Preuss, Aske Plaat

While deep reinforcement learning has shown important empirical success, it tends to learn relatively slowly due to slow propagation of reward information and slow updates of parametric neural networks.

Deep Reinforcement Learning · Reinforcement Learning +2

Mind the Retrosynthesis Gap: Bridging the divide between Single-step and Multi-step Retrosynthesis Prediction

no code implementations · 12 Dec 2022 · Alan Kai Hassen, Paula Torren-Peraire, Samuel Genheden, Jonas Verhoeven, Mike Preuss, Igor Tetko

Retrosynthesis is the task of breaking down a chemical compound recursively step-by-step into molecular precursors until a set of commercially available molecules is found.

Benchmarking · Multi-step retrosynthesis +3
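
The recursion described above is simple to sketch: expand the target with a single-step model and recurse on every precursor until all leaves are purchasable. A minimal depth-first sketch; `single_step` and `in_stock` are hypothetical stand-ins for a trained single-step model and a stock database, and real multi-step planners guide this recursion with search (e.g. MCTS) rather than taking the first route that works:

```python
def plan(target, single_step, in_stock, depth=6):
    """Return a route as a list of (product, precursors) steps, or None.

    target:      SMILES string of the molecule to synthesize.
    single_step: callable mapping a SMILES string to candidate precursor
                 lists, e.g. [["CCO", "CC(=O)Cl"], ...] (hypothetical interface).
    in_stock:    set of commercially available SMILES strings.
    """
    if target in in_stock:
        return []                                # purchasable: nothing left to do
    if depth == 0:
        return None                              # depth limit reached, give up
    for precursors in single_step(target):       # candidate disconnections
        route = [(target, precursors)]
        for p in precursors:
            sub = plan(p, single_step, in_stock, depth - 1)
            if sub is None:
                route = None                     # this disconnection fails
                break
            route += sub
        if route is not None:
            return route                         # first complete route found
    return None
```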

First Go, then Post-Explore: the Benefits of Post-Exploration in Intrinsic Motivation

no code implementations · 6 Dec 2022 · Zhao Yang, Thomas M. Moerland, Mike Preuss, Aske Plaat

In this paper, we present a clear ablation study of post-exploration in a general intrinsically motivated goal exploration process (IMGEP) framework, which the Go-Explore paper did not provide.

Continuous Control +1
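
Post-exploration means that after the agent has "gone" to a previously reached goal, it spends a few extra steps exploring from there before the episode ends. A schematic sketch of one such episode; `env` (Gym-style), `goal_policy`, and `archive` are hypothetical placeholders rather than the paper's exact components:

```python
import random

def go_then_post_explore(env, goal_policy, archive, go_steps=50, post_steps=10):
    """One IMGEP-style episode: go to a known goal, then post-explore (sketch)."""
    goal = random.choice(archive)                # revisit a previously reached goal
    state = env.reset()
    for _ in range(go_steps):                    # 'go' phase: head for the goal
        state, _, done, _ = env.step(goal_policy(state, goal))
        if done:
            return
    for _ in range(post_steps):                  # post-exploration from the goal
        state, _, done, _ = env.step(env.action_space.sample())
        if done:
            return
        archive.append(state)                    # discoveries enrich the goal archive
```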

Continuous Episodic Control

no code implementations · 28 Nov 2022 · Zhao Yang, Thomas M. Moerland, Mike Preuss, Aske Plaat

Therefore, this paper introduces Continuous Episodic Control (CEC), a novel non-parametric episodic memory algorithm for sequential decision making in problems with a continuous action space.

Continuous Control +5
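
Non-parametric episodic control over continuous actions can be sketched as a nearest-neighbour table: store (state, action, return) tuples and, at decision time, reuse a slightly perturbed copy of the action that led to the highest return near the current state. A minimal illustration of the idea, not CEC's exact algorithm:

```python
import numpy as np

class EpisodicMemory:
    """Nearest-neighbour episodic memory for continuous actions (sketch)."""

    def __init__(self):
        self.states, self.actions, self.returns = [], [], []

    def add(self, state, action, ret):
        self.states.append(np.asarray(state, dtype=float))
        self.actions.append(np.asarray(action, dtype=float))
        self.returns.append(float(ret))

    def act(self, state, k=5, noise=0.1):
        if not self.states:
            return None                          # caller falls back to a random action
        dists = np.linalg.norm(np.stack(self.states) - state, axis=1)
        nearest = np.argsort(dists)[:k]          # k closest stored states
        best = max(nearest, key=lambda i: self.returns[i])
        # Perturb the remembered action slightly to keep exploring.
        return self.actions[best] + noise * np.random.randn(*self.actions[best].shape)
```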

Generalization, Mayhems and Limits in Recurrent Proximal Policy Optimization

no code implementations · 23 May 2022 · Marco Pleines, Matthias Pallasch, Frank Zimmer, Mike Preuss

At first sight it may seem straightforward to use recurrent layers in Deep Reinforcement Learning algorithms to enable agents to make use of memory in the setting of partially observable environments.

Benchmarking · Deep Reinforcement Learning

When to Go, and When to Explore: The Benefit of Post-Exploration in Intrinsic Motivation

no code implementations · 29 Mar 2022 · Zhao Yang, Thomas M. Moerland, Mike Preuss, Aske Plaat

Go-Explore achieved breakthrough performance on challenging reinforcement learning (RL) tasks with sparse rewards.

Reinforcement Learning (RL)

Reliable validation of Reinforcement Learning Benchmarks

no code implementations · 2 Mar 2022 · Matthias Müller-Brockhausen, Aske Plaat, Mike Preuss

Reinforcement Learning (RL) is one of the most dynamic research areas in Game AI and AI as a whole, and a wide variety of games are used as its prominent test problems.

Benchmarking · Data Compression +4

Potential-based Reward Shaping in Sokoban

no code implementations · 10 Sep 2021 · Zhao Yang, Mike Preuss, Aske Plaat

While previous work has investigated the use of expert knowledge to generate potential functions, in this work we study whether we can use a search algorithm (A*) to automatically generate a potential function for reward shaping in Sokoban, a well-known planning task.

Sokoban
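
Potential-based shaping adds F(s, s') = γΦ(s') − Φ(s) to the environment reward, which provably leaves the optimal policy unchanged (Ng et al., 1999); in the setting above, a natural potential is the negated search distance to the goal. A minimal sketch using a cheap Manhattan-distance stand-in for the paper's A*-derived potential:

```python
GAMMA = 0.99

def potential(boxes, goals):
    """Phi(s): negated sum of Manhattan distances from each box to its nearest goal.

    A cheap stand-in for the A*-derived cost-to-go used in the paper.
    """
    return -sum(min(abs(bx - gx) + abs(by - gy) for gx, gy in goals)
                for bx, by in boxes)

def shaped_reward(r, boxes, goals, next_boxes):
    """r' = r + gamma * Phi(s') - Phi(s); shaping of this form preserves optimal policies."""
    return r + GAMMA * potential(next_boxes, goals) - potential(boxes, goals)

# Example: pushing the box one step toward the goal yields a positive bonus.
goals = [(3, 3)]
print(shaped_reward(0.0, boxes=[(1, 1)], goals=goals, next_boxes=[(2, 1)]))
```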

Transfer Learning and Curriculum Learning in Sokoban

no code implementations · 25 May 2021 · Zhao Yang, Mike Preuss, Aske Plaat

In reinforcement learning, learning actions for a behavior policy that can be applied to new environments is still a challenge, especially for tasks that involve much planning.

Reinforcement Learning +3

Adaptive Warm-Start MCTS in AlphaZero-like Deep Reinforcement Learning

no code implementations · 13 May 2021 · Hui Wang, Mike Preuss, Aske Plaat

AlphaZero has achieved impressive performance in deep reinforcement learning by utilizing an architecture that combines search and training of a neural network in self-play.

Board Games · Deep Reinforcement Learning +2

Applications of Artificial Intelligence in Live Action Role-Playing Games (LARP)

no code implementations · 25 Aug 2020 · Christoph Salge, Emily Short, Mike Preuss, Spyridon Samothrakis, Pieter Spronck

Live Action Role-Playing (LARP) games and similar experiences are becoming a popular game genre.

Tackling Morpion Solitaire with AlphaZero-like Ranked Reward Reinforcement Learning

no code implementations · 14 Jun 2020 · Hui Wang, Mike Preuss, Michael Emmerich, Aske Plaat

A later algorithm, Nested Rollout Policy Adaptation, was able to find a new record of 82 steps, albeit with large computational resources.

Game of Go · Reinforcement Learning +4

Versatile Black-Box Optimization

no code implementations · 29 Apr 2020 · Jialin Liu, Antoine Moreau, Mike Preuss, Baptiste Roziere, Jeremy Rapin, Fabien Teytaud, Olivier Teytaud

Automatically choosing the right algorithm based on problem descriptors is a classical component of combinatorial optimization.

Combinatorial Optimization · Evolutionary Algorithms
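
Descriptor-based algorithm selection of the kind described can be as simple as a rule table keyed on coarse problem features (dimension, budget, noise, discreteness). An illustrative sketch, loosely in the spirit of such selectors; the rules below are made up for illustration, not taken from the paper:

```python
def select_algorithm(dim: int, budget: int, noisy: bool, discrete: bool) -> str:
    """Pick an optimizer from coarse problem descriptors (illustrative rules only)."""
    if discrete:
        return "(1+1)-EA"                  # simple evolutionary algorithm for discrete domains
    if noisy:
        return "population-based CMA-ES"   # averaging over a population dampens noise
    if budget < 30 * dim:
        return "Nelder-Mead"               # cheap local search for tiny budgets
    return "CMA-ES"                        # solid default for continuous problems

print(select_algorithm(dim=10, budget=10_000, noisy=False, discrete=False))
```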

Warm-Start AlphaZero Self-Play Search Enhancements

no code implementations · 26 Apr 2020 · Hui Wang, Mike Preuss, Aske Plaat

Recently, AlphaZero has achieved landmark results in deep reinforcement learning by providing a single self-play architecture that learned three different games at superhuman level.

Board Games · Deep Reinforcement Learning +1

A New Challenge: Approaching Tetris Link with AI

no code implementations · 1 Apr 2020 · Matthias Müller-Brockhausen, Mike Preuss, Aske Plaat

This paper focuses on a new game, Tetris Link, a board game that so far lacks any scientific analysis.

Reinforcement Learning

Obstacle Tower Without Human Demonstrations: How Far a Deep Feed-Forward Network Goes with Reinforcement Learning

1 code implementation · 1 Apr 2020 · Marco Pleines, Jenia Jitsev, Mike Preuss, Frank Zimmer

The Obstacle Tower Challenge is the task of mastering a procedurally generated chain of levels that get progressively harder to complete.

Deep Reinforcement Learning

Analysis of Hyper-Parameters for Small Games: Iterations or Epochs in Self-Play?

no code implementations · 12 Mar 2020 · Hui Wang, Michael Emmerich, Mike Preuss, Aske Plaat

A secondary result of our experiments concerns the choice of optimization goals, for which we also provide recommendations.

Reinforcement Learning

From Chess and Atari to StarCraft and Beyond: How Game AI is Driving the World of AI

no code implementations · 24 Feb 2020 · Sebastian Risi, Mike Preuss

This paper reviews the field of Game AI, which not only deals with creating agents that can play a certain game, but also with areas as diverse as creating game content automatically, game analytics, or player modelling.

StarCraft

Hyper-Parameter Sweep on AlphaZero General

1 code implementation · 19 Mar 2019 · Hui Wang, Michael Emmerich, Mike Preuss, Aske Plaat

Therefore, in this paper, we choose 12 parameters in AlphaZero and evaluate how these parameters contribute to training.

Game of Go

Learning to Plan Chemical Syntheses

no code implementations · 14 Aug 2017 · Marwin H. S. Segler, Mike Preuss, Mark P. Waller

We anticipate that our method will accelerate drug and materials discovery by assisting chemists to plan better syntheses faster, and by enabling fully automated robot synthesis.

Retrosynthesis

The True Destination of EGO is Multi-local Optimization

no code implementations · 19 Apr 2017 · Simon Wessing, Mike Preuss

Efficient global optimization (EGO) is a popular algorithm for the optimization of expensive multimodal black-box functions.
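
EGO fits a Gaussian-process surrogate and maximizes expected improvement, EI(x) = E[max(0, f_min − Y(x))]; the multi-local behaviour the title alludes to comes from EI repeatedly pulling the search toward distinct promising basins. A minimal sketch of the closed-form EI criterion under a minimization convention (standard textbook formula, not code from the paper):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    """Closed-form EI for a GP posterior with mean mu and std sigma (minimization).

    EI(x) = (f_min - mu) * Phi(z) + sigma * phi(z),  with z = (f_min - mu) / sigma.
    """
    mu, sigma = np.asarray(mu, dtype=float), np.asarray(sigma, dtype=float)
    z = (f_min - mu) / np.maximum(sigma, 1e-12)     # guard against zero variance
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# EI is large both where the predicted mean is low and where uncertainty is high,
# which is what draws EGO toward multiple local optima.
print(expected_improvement(mu=[0.5, 1.5], sigma=[0.2, 1.0], f_min=1.0))
```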
