no code implementations • 18 Jan 2023 • Michael Kölle, Tim Matheis, Philipp Altmann, Kyrill Schmid
Enabling autonomous agents to act cooperatively is an important step toward integrating artificial intelligence into our daily lives.
no code implementations • 15 Jul 2022 • Kyrill Schmid, Lenz Belzner, Robert Müller, Johannes Tochtermann, Claudia Linnhoff-Popien
Some of the most relevant future applications of multi-agent systems, such as autonomous driving or factories as a service, are mixed-motive scenarios in which agents may have conflicting goals.
no code implementations • 5 Jul 2022 • Michael Kölle, Lennart Rietdorf, Kyrill Schmid
In this environment, reinforcement learning agents learn to trade successfully.
no code implementations • 22 Sep 2021 • Tobias Müller, Christoph Roch, Kyrill Schmid, Philipp Altmann
Reinforcement learning has driven impressive advances in machine learning.
no code implementations • 22 Sep 2021 • Tobias Müller, Kyrill Schmid, Daniëlle Schuman, Thomas Gabor, Markus Friedrich, Marc Geitz
The expansion of Fiber-To-The-Home (FTTH) networks creates high costs due to expensive excavation procedures.
no code implementations • 11 Dec 2020 • Robert Müller, Steffen Illium, Fabian Ritz, Kyrill Schmid
In this work, we thoroughly evaluate the efficacy of pretrained neural networks as feature extractors for anomalous sound detection.
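A common way to use pretrained networks as feature extractors for anomaly detection is to embed each sound clip and score it by its distance to the embeddings of known-normal training data. The sketch below illustrates this idea with synthetic stand-in embeddings (random vectors; the actual paper's models, data, and scoring details are not specified here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for embeddings a pretrained network would produce, e.g. the
# penultimate-layer activations of an audio classification model.
normal_train = rng.normal(0.0, 1.0, size=(200, 64))  # embeddings of normal sounds
normal_test = rng.normal(0.0, 1.0, size=(20, 64))    # held-out normal sounds
anomalous = rng.normal(4.0, 1.0, size=(20, 64))      # shifted cluster = anomalies

def knn_anomaly_score(x, train, k=5):
    """Mean Euclidean distance from x to its k nearest training embeddings."""
    dists = np.linalg.norm(train - x, axis=1)
    return np.sort(dists)[:k].mean()

normal_scores = [knn_anomaly_score(x, normal_train) for x in normal_test]
anomaly_scores = [knn_anomaly_score(x, normal_train) for x in anomalous]
```

Anomalous clips land far from the normal training cluster in embedding space, so their k-NN scores are markedly higher; thresholding the score yields a detector.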
no code implementations • 5 Aug 2019 • Stefan Langer, Robert Müller, Kyrill Schmid, Claudia Linnhoff-Popien
The difficulty of mountain bike downhill trails is perceived subjectively.
1 code implementation • 10 May 2019 • Thomy Phan, Lenz Belzner, Marie Kiermeier, Markus Friedrich, Kyrill Schmid, Claudia Linnhoff-Popien
State-of-the-art approaches to partially observable planning like POMCP are based on stochastic tree search.
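The core idea behind stochastic tree search planners such as POMCP is to estimate action values by sampling simulated trajectories from a generative model. A minimal flat Monte-Carlo variant (random rollouts on a toy fully observed chain world; POMCP additionally maintains particle-based belief states and a UCB-guided tree, which this sketch omits) looks like:

```python
import random

# Toy chain world: states 0..9, goal at state 9, actions move left/right.
GOAL, N_STATES = 9, 10

def step(state, action):
    """Apply action (+1 or -1); the move succeeds with prob. 0.8, else stay."""
    if random.random() < 0.8:
        state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if state == GOAL else -0.01
    return state, reward

def rollout(state, depth=20):
    """Estimate return from `state` by simulating a uniformly random policy."""
    total = 0.0
    for _ in range(depth):
        state, r = step(state, random.choice((-1, 1)))
        total += r
        if state == GOAL:
            break
    return total

def mc_plan(state, n_sims=300):
    """Pick the action with the highest average sampled return."""
    def value(action):
        acc = 0.0
        for _ in range(n_sims):
            next_state, r = step(state, action)
            acc += r + rollout(next_state)
        return acc / n_sims
    return max((-1, 1), key=value)
```

Because returns are estimated purely by sampling the simulator, the planner needs no explicit transition matrix, which is what makes this family of methods attractive for large or partially observable problems.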
no code implementations • 25 Jan 2019 • Thomy Phan, Kyrill Schmid, Lenz Belzner, Thomas Gabor, Sebastian Feld, Claudia Linnhoff-Popien
We experimentally evaluate STEP in two challenging stochastic domains with large state and joint action spaces, and show that by combining multi-agent open-loop planning with centralized function approximation, STEP learns stronger policies than standard multi-agent reinforcement learning algorithms.
no code implementations • 30 Oct 2018 • Thomas Gabor, Lenz Belzner, Thomy Phan, Kyrill Schmid
As automatic optimization techniques find their way into industrial applications, the behavior of many complex systems is determined by some form of planner picking the right actions to optimize a given objective function.