no code implementations • 4 Apr 2024 • Philipp Altmann, Céline Davignon, Maximilian Zorn, Fabian Ritz, Claudia Linnhoff-Popien, Thomas Gabor
To enhance the interpretability of Reinforcement Learning (RL), we propose Revealing Evolutionary Action Consequence Trajectories (REACT).
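The evolutionary search over action trajectories can be illustrated with a generic sketch — this is not the REACT algorithm itself, just a minimal elitist evolutionary loop over fixed-length discrete action sequences, with all names and the binary action space chosen for illustration:

```python
import random

def evolve_trajectories(fitness, length, pop_size=20, generations=200, seed=1):
    """Generic elitist evolutionary search over fixed-length action sequences.
    Keeps the fitter half each generation and adds one point-mutated child
    per surviving parent, so the best individual never degrades."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for p in parents:
            child = p[:]
            i = rng.randrange(length)
            child[i] = 1 - child[i]      # point mutation: flip one action
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Usage: evolve 10-step binary action sequences toward a toy fitness (sum of 1s).
best = evolve_trajectories(sum, 10)
```

A real interpretability use would replace the toy fitness with a measure of how revealing a trajectory is about the learned policy.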
no code implementations • 13 Jan 2024 • Michael Kölle, Tom Schubert, Philipp Altmann, Maximilian Zorn, Jonas Stein, Claudia Linnhoff-Popien
With recent advancements in quantum computing technology, optimizing quantum circuits and ensuring reliable quantum state preparation have become increasingly vital.
no code implementations • 13 Jan 2024 • Michael Kölle, Gerhard Stenzel, Jonas Stein, Sebastian Zielinski, Björn Ommer, Claudia Linnhoff-Popien
In recent years, machine learning models like DALL-E, Craiyon, and Stable Diffusion have gained significant attention for their ability to generate high-resolution images from concise descriptions.
no code implementations • 13 Jan 2024 • Michael Kölle, Mohamad Hgog, Fabian Ritz, Philipp Altmann, Maximilian Zorn, Jonas Stein, Claudia Linnhoff-Popien
In this work, we propose a novel quantum reinforcement learning approach that combines the Advantage Actor-Critic algorithm with variational quantum circuits by substituting parts of the classical components.
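Substituting a classical network component with a variational quantum circuit can be sketched with a tiny hand-rolled state-vector simulator — the two-qubit RY/CNOT layout and all names here are illustrative assumptions, not the paper's architecture:

```python
import math

def apply_ry(state, qubit, theta):
    """Apply an RY(theta) rotation to one qubit of a little-endian state vector."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    new = state[:]
    for i in range(len(state)):
        if not (i >> qubit) & 1:        # pair the |...0...> / |...1...> amplitudes
            j = i | (1 << qubit)
            new[i] = c * state[i] - s * state[j]
            new[j] = s * state[i] + c * state[j]
    return new

def apply_cnot(state, control, target):
    """Swap target-bit amplitude pairs wherever the control bit is set."""
    new = state[:]
    for i in range(len(state)):
        if (i >> control) & 1:
            j = i ^ (1 << target)
            if i < j:
                new[i], new[j] = state[j], state[i]
    return new

def vqc_policy_logit(params):
    """Two-qubit variational circuit; <Z> on qubit 0 could serve as an actor
    logit in place of a classical output layer."""
    state = [1.0, 0.0, 0.0, 0.0]        # |00>; RY and CNOT keep amplitudes real
    for layer in params:                # each layer: one RY angle per qubit
        for q, theta in enumerate(layer):
            state = apply_ry(state, q, theta)
        state = apply_cnot(state, 0, 1)  # entangling gate
    return sum((1 if not (i & 1) else -1) * a * a for i, a in enumerate(state))
```

In practice the angles would be trained by gradient descent alongside the remaining classical actor-critic components.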
1 code implementation • 13 Jan 2024 • Michael Kölle, Yannick Erpelding, Fabian Ritz, Thomy Phan, Steffen Illium, Claudia Linnhoff-Popien
Recent advances in Multi-Agent Reinforcement Learning have prompted the modeling of intricate interactions between agents in simulated environments.
no code implementations • 7 Jan 2024 • Robert Müller, Hasan Turalic, Thomy Phan, Michael Kölle, Jonas Nüßlein, Claudia Linnhoff-Popien
In the realm of Multi-Agent Reinforcement Learning (MARL), prevailing approaches exhibit shortcomings in aligning with human learning, robustness, and scalability.
1 code implementation • 18 Dec 2023 • Philipp Altmann, Jonas Stein, Michael Kölle, Adelina Bärligea, Thomas Gabor, Thomy Phan, Sebastian Feld, Claudia Linnhoff-Popien
Quantum computing (QC) in the current NISQ era is still limited in size and precision.
no code implementations • 14 Dec 2023 • Michael Kölle, Afrae Ahouzi, Pascal Debus, Robert Müller, Danielle Schuman, Claudia Linnhoff-Popien
Quantum computing, with its potential to enhance various machine learning tasks, allows significant advancements in kernel calculation and model precision.
no code implementations • 9 Dec 2023 • Jonas Stein, Navid Roshani, Maximilian Zorn, Philipp Altmann, Michael Kölle, Claudia Linnhoff-Popien
A central challenge in quantum machine learning is the design and training of parameterized quantum circuits (PQCs).
no code implementations • 27 Nov 2023 • Daniëlle Schuman, Leo Sünkel, Philipp Altmann, Jonas Stein, Christoph Roch, Thomas Gabor, Claudia Linnhoff-Popien
Quantum Transfer Learning (QTL) recently gained popularity as a hybrid quantum-classical approach for image classification tasks by efficiently combining the feature extraction capabilities of large Convolutional Neural Networks with the potential benefits of Quantum Machine Learning (QML).
no code implementations • 9 Nov 2023 • Michael Kölle, Jonas Maurer, Philipp Altmann, Leo Sünkel, Jonas Stein, Claudia Linnhoff-Popien
We propose a novel hybrid architecture: instead of utilizing a pre-trained network for compression, we employ an autoencoder to derive a compressed version of the input data.
no code implementations • 9 Nov 2023 • Michael Kölle, Felix Topp, Thomy Phan, Philipp Altmann, Jonas Nüßlein, Claudia Linnhoff-Popien
We show that our Variational Quantum Circuit approaches perform significantly better than a neural network with a similar number of trainable parameters.
1 code implementation • 20 Jul 2023 • Jonas Stein, Ivo Christ, Nicolas Kraus, Maximilian Balthasar Mansky, Robert Müller, Claudia Linnhoff-Popien
As an application domain where the slightest qualitative improvements can yield immense value, finance is a promising candidate for early quantum advantage.
no code implementations • 28 Jun 2023 • Michael Kölle, Steffen Illium, Maximilian Zorn, Jonas Nüßlein, Patrick Suchostawski, Claudia Linnhoff-Popien
In the field of wildlife observation and conservation, approaches involving machine learning on audio recordings are becoming increasingly popular.
no code implementations • 9 Jun 2023 • Michael Kölle, Alessandro Giovagnoli, Jonas Stein, Maximilian Balthasar Mansky, Julian Hager, Tobias Rohe, Robert Müller, Claudia Linnhoff-Popien
Inspired by the remarkable success of artificial neural networks across a broad spectrum of AI tasks, variational quantum circuits (VQCs) have recently seen an upsurge in quantum machine learning applications.
1 code implementation • 26 Apr 2023 • Philipp Altmann, Fabian Ritz, Leonard Feuchtinger, Jonas Nüßlein, Claudia Linnhoff-Popien, Thomy Phan
Current state-of-the-art approaches for generalization apply data augmentation techniques to increase the diversity of training data.
no code implementations • 18 Jan 2023 • Philipp Altmann, Thomy Phan, Fabian Ritz, Thomas Gabor, Claudia Linnhoff-Popien
We propose discriminative reward co-training (DIRECT) as an extension to deep reinforcement learning algorithms.
no code implementations • 18 Jan 2023 • Michael Kölle, Steffen Illium, Carsten Hahn, Lorenz Schauer, Johannes Hutter, Claudia Linnhoff-Popien
The ubiquitous availability of mobile devices capable of location tracking led to a significant rise in the collection of GPS data.
1 code implementation • 6 Jan 2023 • Philipp Altmann, Leo Sünkel, Jonas Stein, Tobias Müller, Christoph Roch, Claudia Linnhoff-Popien
However, as high-dimensional real-world applications are not yet feasible to be solved using purely quantum hardware, hybrid methods using both classical and quantum machine learning paradigms have been proposed.
no code implementations • 22 Dec 2022 • Michael Kölle, Alessandro Giovagnoli, Jonas Stein, Maximilian Balthasar Mansky, Julian Hager, Claudia Linnhoff-Popien
In recent years, quantum machine learning has seen a substantial increase in the use of variational quantum circuits (VQCs).
no code implementations • 20 Dec 2022 • Steffen Illium, Thore Schillman, Robert Müller, Thomas Gabor, Claudia Linnhoff-Popien
Common to all different kinds of recurrent neural networks (RNNs) is the intention to model relations between data points through time.
no code implementations • 20 Dec 2022 • Steffen Illium, Gretchen Griffin, Michael Kölle, Maximilian Zorn, Jonas Nüßlein, Claudia Linnhoff-Popien
We primarily utilize non-linear recombination of information within an image, fragmenting and occluding small information patches.
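Patch occlusion of this kind can be sketched as follows — a minimal illustration on a 2D list, not the paper's exact augmentation pipeline:

```python
import random

def occlude_patches(img, patch, frac=0.3, seed=0):
    """Zero out a random fraction of non-overlapping patch x patch squares
    of a 2D image (list of lists), leaving the input untouched."""
    rng = random.Random(seed)
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    cells = [(r, c) for r in range(0, h, patch) for c in range(0, w, patch)]
    for r, c in rng.sample(cells, int(frac * len(cells))):
        for i in range(r, min(r + patch, h)):
            for j in range(c, min(c + patch, w)):
                out[i][j] = 0
    return out
```

A recombination variant would shuffle the sampled patches among their positions instead of zeroing them.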
no code implementations • 20 Dec 2022 • Steffen Illium, Maximilian Zorn, Cristian Lenta, Michael Kölle, Claudia Linnhoff-Popien, Thomas Gabor
We introduce organism networks, which function like a single neural network but are composed of several neural particle networks; while each particle network fulfils the role of a single weight application within the organism network, it is also trained to self-replicate its own weights.
no code implementations • 10 Aug 2022 • Fabian Ritz, Thomy Phan, Andreas Sedlmeier, Philipp Altmann, Jan Wieghardt, Reiner Schmid, Horst Sauer, Cornel Klein, Claudia Linnhoff-Popien, Thomas Gabor
We define a comprehensive SD process model for ML that encompasses most tasks and artifacts described in the literature in a consistent way.
no code implementations • 15 Jul 2022 • Kyrill Schmid, Lenz Belzner, Robert Müller, Johannes Tochtermann, Claudia Linnhoff-Popien
Some of the most relevant future applications of multi-agent systems, such as autonomous driving or factories as a service, involve mixed-motive scenarios where agents might have conflicting goals.
1 code implementation • 24 Jun 2022 • Jonas Nüßlein, Christoph Roch, Thomas Gabor, Jonas Stein, Claudia Linnhoff-Popien, Sebastian Feld
A common approach to realising BBO is to learn a surrogate model that approximates the target black-box function and can then be solved via white-box optimization methods.
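The surrogate loop can be sketched in a few lines — a hypothetical one-dimensional version using an interpolating quadratic as the white-box surrogate, not the paper's (QUBO-based) method:

```python
def quad_coeffs(pts):
    """Exact quadratic a*x^2 + b*x + c through three (x, y) points."""
    (x1, y1), (x2, y2), (x3, y3) = pts
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3 * x3 * (y1 - y2) + x2 * x2 * (y3 - y1) + x1 * x1 * (y2 - y3)) / denom
    c = (x2 * x3 * (x2 - x3) * y1 + x3 * x1 * (x3 - x1) * y2
         + x1 * x2 * (x1 - x2) * y3) / denom
    return a, b, c

def surrogate_minimize(f, xs, iters=20):
    """Black-box loop: fit surrogate, minimize it in closed form, query f
    at the surrogate optimum, keep the best samples, repeat."""
    pts = [(x, f(x)) for x in xs]           # xs: three distinct start points
    for _ in range(iters):
        a, b, _ = quad_coeffs(pts)
        if a <= 0:
            break                            # surrogate not convex; stop
        x_new = -b / (2 * a)                 # white-box minimum of surrogate
        if any(abs(x_new - x) < 1e-9 for x, _ in pts):
            break                            # converged to an existing sample
        pts.append((x_new, f(x_new)))
        pts.sort(key=lambda p: p[1])
        pts = pts[:3]                        # keep the three best samples
    return min(pts, key=lambda p: p[1])
```

Each iteration spends exactly one black-box query at the point the cheap surrogate considers most promising.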
1 code implementation • 12 Jun 2022 • Jonas Nüßlein, Steffen Illium, Robert Müller, Thomas Gabor, Claudia Linnhoff-Popien
As a prior, we assume that the higher-level strategy is to reach an unknown target state area, which we hypothesize is a valid prior for many domains in Reinforcement Learning.
no code implementations • 14 Dec 2021 • Andreas Sedlmeier, Michael Kölle, Robert Müller, Leo Baudrexel, Claudia Linnhoff-Popien
In this work, we analyze existing and propose new metrics for the detection and quantification of multimodal uncertainty in RL based World Models.
1 code implementation • NeurIPS 2021 • Thomy Phan, Fabian Ritz, Lenz Belzner, Philipp Altmann, Thomas Gabor, Claudia Linnhoff-Popien
We evaluate VAST in three multi-agent domains and show that VAST can significantly outperform state-of-the-art VFF when the number of agents is sufficiently large.
1 code implementation • ALIFE 2021 • Fabian Ritz, Daniel Ratke, Thomy Phan, Lenz Belzner, Claudia Linnhoff-Popien
This paper considers sustainable and cooperative behavior in multi-agent systems.
no code implementations • 14 Dec 2020 • Fabian Ritz, Thomy Phan, Robert Müller, Thomas Gabor, Andreas Sedlmeier, Marc Zeller, Jan Wieghardt, Reiner Schmid, Horst Sauer, Cornel Klein, Claudia Linnhoff-Popien
A characteristic of reinforcement learning is the ability to develop unforeseen strategies when solving problems.
no code implementations • 11 Dec 2020 • Robert Müller, Steffen Illium, Fabian Ritz, Tobias Schröder, Christian Platschek, Jörg Ochs, Claudia Linnhoff-Popien
In this work, we present a general procedure for acoustic leak detection in water networks that satisfies multiple real-world constraints such as energy efficiency and ease of deployment.
no code implementations • 11 Aug 2020 • Steffen Illium, Robert Müller, Andreas Sedlmeier, Claudia Linnhoff-Popien
In many fields of research, labeled datasets are hard to acquire.
no code implementations • 15 Jul 2020 • Stefan Langer, Liza Obermeier, André Ebert, Markus Friedrich, Emma Munisamy, Claudia Linnhoff-Popien
Finding stations that play the preferred content is therefore a tough task for a potential listener, especially given the overwhelming number of available choices.
no code implementations • 5 Jun 2020 • Robert Müller, Fabian Ritz, Steffen Illium, Claudia Linnhoff-Popien
In industrial applications, the early detection of malfunctioning factory machinery is crucial.
no code implementations • 25 May 2020 • Andreas Sedlmeier, Robert Müller, Steffen Illium, Claudia Linnhoff-Popien
One critical prerequisite for the deployment of reinforcement learning systems in the real world is the ability to reliably detect situations on which the agent was not trained.
no code implementations • 29 Apr 2020 • Thomas Gabor, Leo Sünkel, Fabian Ritz, Thomy Phan, Lenz Belzner, Christoph Roch, Sebastian Feld, Claudia Linnhoff-Popien
We discuss the synergetic connection between quantum computing and artificial intelligence.
no code implementations • 29 Apr 2020 • Thomas Gabor, Sebastian Feld, Hila Safi, Thomy Phan, Claudia Linnhoff-Popien
Current hardware limitations restrict the potential when solving quadratic unconstrained binary optimization (QUBO) problems via the quantum approximate optimization algorithm (QAOA) or quantum annealing (QA).
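For very small instances, the QUBO formulation itself is easy to make concrete — a brute-force sketch (which is exactly the loop that QAOA or quantum annealing hardware is meant to replace at scale):

```python
from itertools import product

def qubo_energy(Q, x):
    """Energy x^T Q x for a binary vector x, with Q given as an
    upper-triangular dict mapping (i, j) -> weight."""
    return sum(w * x[i] * x[j] for (i, j), w in Q.items())

def brute_force_qubo(Q, n):
    """Exhaustively minimize a small n-variable QUBO."""
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(Q, x))

# Toy example: the one-hot penalty (x0 + x1 - 1)^2 reduces (using x^2 = x,
# dropping the constant) to Q = {(0,0): -1, (1,1): -1, (0,1): 2}.
Q = {(0, 0): -1, (1, 1): -1, (0, 1): 2}
```

Its minima are the two assignments with exactly one variable set, which is what the penalty term encodes.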
no code implementations • 30 Mar 2020 • Sebastian Feld, Markus Friedrich, Claudia Linnhoff-Popien
The compression of geometry data is an important aspect of bandwidth-efficient data transfer for distributed 3d computer vision applications.
2 code implementations • 11 Mar 2020 • Christoph Roch, Alexander Impertro, Thomy Phan, Thomas Gabor, Sebastian Feld, Claudia Linnhoff-Popien
Such algorithms are usually implemented in a variational form, combining a classical optimization method with a quantum machine to find good solutions to an optimization problem.
no code implementations • 31 Dec 2019 • Andreas Sedlmeier, Thomas Gabor, Thomy Phan, Lenz Belzner, Claudia Linnhoff-Popien
We further present a first viable solution for calculating a dynamic classification threshold, based on the uncertainty distribution of the training data.
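The idea of deriving the threshold from the training uncertainty distribution can be sketched minimally — a quantile-based toy version, not the paper's specific procedure:

```python
def dynamic_threshold(train_uncertainties, quantile=0.99):
    """Place the OOD-detection threshold at a high quantile of the
    uncertainty values observed on the training data."""
    s = sorted(train_uncertainties)
    idx = min(len(s) - 1, int(quantile * len(s)))
    return s[idx]

def is_ood(uncertainty, threshold):
    """Flag a query as out-of-distribution if its uncertainty exceeds
    what was typical during training."""
    return uncertainty > threshold
```

Because the threshold is recomputed from whatever uncertainties training produced, it adapts to the model and task rather than being hand-tuned.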
no code implementations • 5 Aug 2019 • Stefan Langer, Robert Müller, Kyrill Schmid, Claudia Linnhoff-Popien
The difficulty of mountainbike downhill trails is a subjective perception.
no code implementations • 30 Jul 2019 • Robert Müller, Stefan Langer, Fabian Ritz, Christoph Roch, Steffen Illium, Claudia Linnhoff-Popien
In this work we present STEVE - Soccer TEam VEctors, a principled approach for learning real-valued vectors for soccer teams, where similar teams are close to each other in the resulting vector space.
1 code implementation • 11 Jul 2019 • Thomy Phan, Thomas Gabor, Robert Müller, Christoph Roch, Claudia Linnhoff-Popien
We propose Stable Yet Memory Bounded Open-Loop (SYMBOL) planning, a general memory bounded approach to partially observable open-loop planning.
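The open-loop planning setting can be illustrated with a bare random-shooting sketch — this is not SYMBOL itself (it has no stability mechanism, and its only memory bound is the fixed sample budget), just the core idea of scoring whole action sequences without intermediate observations:

```python
import random

def open_loop_plan(simulate, actions, horizon=4, n_seq=200, seed=0):
    """Random-shooting open-loop planning: evaluate complete action
    sequences via rollouts, return the first action of the best one."""
    rng = random.Random(seed)
    best_seq, best_ret = None, float("-inf")
    for _ in range(n_seq):
        seq = [rng.choice(actions) for _ in range(horizon)]
        ret = simulate(seq)                  # rollout return of the whole sequence
        if ret > best_ret:
            best_seq, best_ret = seq, ret
    return best_seq[0]

def chain_return(seq):
    """Toy simulator: +1 for each step right, -1 for each step left."""
    return sum(1 if a == "R" else -1 for a in seq)
```

Unlike closed-loop tree search (e.g. POMCP), memory here stays constant in the history length, at the cost of conditioning on no observations within the plan.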
no code implementations • 10 May 2019 • Carsten Hahn, Thomy Phan, Thomas Gabor, Lenz Belzner, Claudia Linnhoff-Popien
In nature, flocking or swarm behavior is observed in many species as it has beneficial properties like reducing the probability of being caught by a predator.
1 code implementation • 10 May 2019 • Thomy Phan, Lenz Belzner, Marie Kiermeier, Markus Friedrich, Kyrill Schmid, Claudia Linnhoff-Popien
State-of-the-art approaches to partially observable planning like POMCP are based on stochastic tree search.
no code implementations • 25 Jan 2019 • Thomy Phan, Kyrill Schmid, Lenz Belzner, Thomas Gabor, Sebastian Feld, Claudia Linnhoff-Popien
We experimentally evaluate STEP in two challenging and stochastic domains with large state and joint action spaces and show that STEP is able to learn stronger policies than standard multi-agent reinforcement learning algorithms, when combining multi-agent open-loop planning with centralized function approximation.
no code implementations • 8 Jan 2019 • Andreas Sedlmeier, Thomas Gabor, Thomy Phan, Lenz Belzner, Claudia Linnhoff-Popien
Although prior work has shown that dropout-based variational inference techniques and bootstrap-based approaches can be used to model epistemic uncertainty, the suitability for detecting OOD samples in deep reinforcement learning remains an open question.
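The bootstrap-based route to epistemic uncertainty can be sketched with a toy ensemble of linear models — an illustrative assumption, not the paper's experimental setup:

```python
import random

def fit_line(pts):
    """Ordinary least squares for y = a*x + b on (x, y) pairs."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    var = sum((x - mx) ** 2 for x, _ in pts)
    if var == 0:
        return 0.0, my                       # degenerate resample: constant fit
    a = sum((x - mx) * (y - my) for x, y in pts) / var
    return a, my - a * mx

def epistemic_uncertainty(train, x_query, n_models=20, seed=0):
    """Fit each ensemble member on a bootstrap resample of the training data;
    the spread of their predictions approximates epistemic uncertainty."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        sample = [rng.choice(train) for _ in train]   # resample with replacement
        a, b = fit_line(sample)
        preds.append(a * x_query + b)
    mean = sum(preds) / len(preds)
    std = (sum((p - mean) ** 2 for p in preds) / len(preds)) ** 0.5
    return mean, std
```

On noisy data, member predictions fan out for queries far from the training inputs, which is what makes the spread usable as an OOD signal.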
no code implementations • 30 Oct 2018 • Thomas Gabor, Lenz Belzner, Claudia Linnhoff-Popien
Diversity is an important factor in evolutionary algorithms to prevent premature convergence towards a single local optimum.