1 code implementation • 21 Mar 2025 • Abhijeet Pendyala, Tobias Glasmachers
In this work, we augment reinforcement learning with an inference-time collision model to ensure safe and efficient container management in a waste-sorting facility with limited processing capacity.
no code implementations • 13 Mar 2025 • Tom Maus, Nico Zengeler, Tobias Glasmachers
We present a novel reinforcement learning (RL) environment designed to both optimize industrial sorting systems and study agent behavior in evolving spaces.
no code implementations • 20 Dec 2024 • Tobias Glasmachers
We design a class of variable metric evolution strategies well suited for high-dimensional problems.
no code implementations • 16 Dec 2024 • Tim Sziburis, Susanne Blex, Tobias Glasmachers, Ioannis Iossifidis
The identification of individual movement characteristics sets the foundation for the assessment of personal rehabilitation progress and can provide diagnostic information on levels and stages of movement disorders.
no code implementations • 5 Jun 2024 • Omair Ali, Muhammad Saif-ur-Rehman, Marita Metzler, Tobias Glasmachers, Ioannis Iossifidis, Christian Klaes
By adopting the successful training strategies of the NLP domain for BCIs, the GET sets a new standard for the development and application of neural signal generation technologies.
no code implementations • 3 Apr 2024 • Abhijeet Pendyala, Asma Atamna, Tobias Glasmachers
We present a proximal policy optimization (PPO) agent trained through curriculum learning (CL) principles and meticulous reward engineering to optimize a real-world high-throughput waste sorting facility.
no code implementations • 29 Feb 2024 • Pavlos Rath-Manakidis, Frederik Strothmann, Tobias Glasmachers, Laurenz Wiskott
Interpretation and visualization of the behavior of detection transformers tends to highlight the locations in the image that the model attends to, but it provides limited insight into the semantics that the model is focusing on.
no code implementations • 31 Dec 2023 • Tim Sziburis, Susanne Blex, Tobias Glasmachers, Ioannis Iossifidis
We introduce a systematic dataset of 3D center-out task-space trajectories of human hand transport movements in a natural setting.
no code implementations • 16 Oct 2023 • Simon Hakenes, Tobias Glasmachers
This work addresses the challenge of navigating expansive spaces with sparse rewards through Reinforcement Learning (RL).
no code implementations • 1 Aug 2023 • Stephan Johann Lehmler, Muhammad Saif-ur-Rehman, Tobias Glasmachers, Ioannis Iossifidis
Our approach models activation patterns of thresholded nodes in (deep) artificial neural networks as stochastic processes.
1 code implementation • 6 Jul 2023 • Abhijeet Pendyala, Justin Dettmer, Tobias Glasmachers, Asma Atamna
It is sufficiently versatile to evaluate reinforcement learning algorithms on any real-world problem that fits our resource allocation framework.
no code implementations • 29 Dec 2022 • Felix Grün, Muhammad Saif-ur-Rehman, Tobias Glasmachers, Ioannis Iossifidis
In recent years, distributional reinforcement learning has produced many state-of-the-art results.
no code implementations • 3 Jul 2022 • Tobias Glasmachers
Support vector machines (SVMs) are a standard method in the machine learning toolbox, in particular for tabular data.
no code implementations • 21 Jun 2022 • Omair Ali, Muhammad Saif-ur-Rehman, Tobias Glasmachers, Ioannis Iossifidis, Christian Klaes
Approach: In this work, we introduce a single hybrid model called ConTraNet, based on CNN and Transformer architectures, which is equally useful for EEG-HMI and EMG-HMI paradigms.
no code implementations • 27 Jan 2022 • Marie D. Schmidt, Tobias Glasmachers, Ioannis Iossifidis
Voluntary human motion is the product of muscle activity that results from upstream motion planning of the motor cortical areas.
no code implementations • 30 Dec 2021 • Stephan Johann Lehmler, Muhammad Saif-ur-Rehman, Tobias Glasmachers, Ioannis Iossifidis
In this study, we investigate the effectiveness of transfer learning using weight initialization for recalibration of two different pretrained deep learning models on a new subject's data, and compare their performance to subject-specific models.
no code implementations • 1 Dec 2021 • Tobias Glasmachers
It is non-standard in that we do not even aim to estimate hitting times based on drift.
no code implementations • 29 Sep 2021 • Giuseppe Cuccu, Luca Sven Rolshoven, Fabien Vorpe, Philippe Cudre-Mauroux, Tobias Glasmachers
We present a novel framework for Distributing Black-Box Optimization (DiBB).
no code implementations • 30 Nov 2020 • Omair Ali, Muhammad Saif-ur-Rehman, Susanne Dyck, Tobias Glasmachers, Ioannis Iossifidis, Christian Klaes
GNAA is not only an augmentation method but also a means of harnessing adversarial inputs in EEG data, improving both the classification accuracy and the robustness of the classifier.
no code implementations • 11 Nov 2020 • Nils Müller, Tobias Glasmachers
In stochastic optimization, particularly in evolutionary computation and reinforcement learning, the optimization of a function $f: \Omega \to \mathbb{R}$ is often addressed through optimizing a so-called relaxation $\theta \in \Theta \mapsto \mathbb{E}_\theta(f)$ of $f$, where $\Theta$ represents the parameters of a family of probability measures on $\Omega$.
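This relaxation idea can be made concrete with a small Monte Carlo sketch (a generic illustration of the setup, not this paper's construction): for a Gaussian family centered at $\theta$, the log-likelihood ("score") trick gives an unbiased estimate of the gradient of $\mathbb{E}_\theta(f)$ from function values alone.

```python
import numpy as np

def relaxed_value(f, theta, sigma=0.5, n=4000, rng=None):
    """Monte Carlo estimate of E_theta[f(x)] for x ~ N(theta, sigma^2 I)."""
    rng = rng or np.random.default_rng(0)
    x = theta + sigma * rng.standard_normal((n, theta.size))
    return float(np.mean([f(xi) for xi in x]))

def score_gradient(f, theta, sigma=0.5, n=4000, rng=None):
    """Gradient of the relaxation via the score trick:
    grad_theta E_theta[f] = E[f(x) * (x - theta) / sigma^2]."""
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal((n, theta.size))
    fx = np.array([f(theta + sigma * e) for e in eps])
    return (fx[:, None] * eps).mean(axis=0) / sigma

sphere = lambda x: float(np.dot(x, x))   # f(x) = ||x||^2
theta = np.array([2.0, -1.0])
g = score_gradient(sphere, theta)        # true gradient of the relaxation: 2*theta
```

Note that the relaxation of the sphere function is smooth in $\theta$ even when $f$ itself is not differentiable, which is exactly why this reformulation is popular in evolutionary computation and RL.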
no code implementations • 20 Sep 2020 • Hlynur Davíð Hlynsson, Merlin Schüler, Robin Schiewer, Tobias Glasmachers, Laurenz Wiskott
The prediction function is used as a forward model for search on a graph in a viewpoint-matching task and the representation learned to maximize predictability is found to outperform a pre-trained representation.
no code implementations • 14 Sep 2020 • Mohamed Nafzi, Michael Brauckmann, Tobias Glasmachers
Vehicle shape information is very important in Intelligent Traffic Systems (ITS).
no code implementations • 14 Sep 2020 • Mohamed Nafzi, Michael Brauckmann, Tobias Glasmachers
We explain in detail how to improve the performance of this method using a trained network designed for classification.
no code implementations • 6 Sep 2020 • Tobias Glasmachers, Oswin Krause
The class of algorithms called Hessian Estimation Evolution Strategies (HE-ESs) update the covariance matrix of their sampling distribution by directly estimating the curvature of the objective function.
1 code implementation • 16 Apr 2020 • Declan Oller, Tobias Glasmachers, Giuseppe Cuccu
We propose a novel method for analyzing and visualizing the complexity of standard reinforcement learning (RL) benchmarks based on score distributions.
no code implementations • 30 Mar 2020 • Tobias Glasmachers, Oswin Krause
We demonstrate that our approach to covariance matrix adaptation is efficient by evaluating it on the BBOB/COCO testbed.
no code implementations • 23 Dec 2019 • Muhammad Saif-ur-Rehman, Omair Ali, Robin Lienkaemper, Susanne Dyck, Marita Metzler, Yaroslav Parpaley, Joerg Wellmer, Charles Liu, Brian Lee, Spencer Kellis, Richard Andersen, Ioannis Iossifidis, Tobias Glasmachers, Christian Klaes
We proposed a novel spike sorting pipeline, based on a set of supervised and unsupervised learning algorithms.
no code implementations • 15 May 2019 • Mohamed Nafzi, Michael Brauckmann, Tobias Glasmachers
In order to facilitate and accelerate the progress in this subject, we will present our way to collect and to label a large scale data set.
no code implementations • 23 Oct 2018 • Tobias Glasmachers
In this paper we analyze the specific challenges that can be posed by quadratic functions in the bi-objective case.
no code implementations • 26 Jun 2018 • Tobias Glasmachers, Sahar Qaadan
Limiting the model size of a kernel support vector machine to a pre-defined budget is a well-established technique that allows scaling SVM learning and prediction to large-scale data.
no code implementations • 26 Jun 2018 • Sahar Qaadan, Merlin Schüler, Tobias Glasmachers
We present a dual subspace ascent algorithm for support vector machine training that respects a budget constraint limiting the number of support vectors.
no code implementations • 26 Jun 2018 • Sahar Qaadan, Tobias Glasmachers
Budgeted Stochastic Gradient Descent (BSGD) is a state-of-the-art technique for training large-scale kernelized support vector machines.
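The budget idea can be sketched in a few lines. The toy below combines Pegasos-style kernelized stochastic gradient descent with the simplest budget-maintenance strategy, removing the support vector with the smallest coefficient magnitude; full BSGD typically uses more refined strategies such as merging or projection, so this is an illustration of the principle, not the paper's algorithm.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

class BudgetedKernelSGD:
    """Toy Pegasos-style kernelized SGD with a hard budget on the number
    of support vectors; the budget is enforced by dropping the support
    vector with the smallest |coefficient|."""

    def __init__(self, budget=15, lam=0.01, gamma=1.0):
        self.budget, self.lam, self.gamma = budget, lam, gamma
        self.sv, self.alpha, self.t = [], [], 0

    def decision(self, x):
        return sum(a * rbf(v, x, self.gamma)
                   for v, a in zip(self.sv, self.alpha))

    def partial_fit(self, x, y):
        self.t += 1
        eta = 1.0 / (self.lam * self.t)                   # Pegasos step size
        self.alpha = [(1 - eta * self.lam) * a for a in self.alpha]
        if y * self.decision(x) < 1:                      # hinge loss violated
            self.sv.append(x)
            self.alpha.append(eta * y)
            if len(self.sv) > self.budget:                # budget maintenance
                i = int(np.argmin(np.abs(self.alpha)))
                del self.sv[i]
                del self.alpha[i]

# Usage: two well-separated Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (100, 2)), rng.normal(2, 0.5, (100, 2))])
Y = np.array([-1] * 100 + [1] * 100)
clf = BudgetedKernelSGD(budget=15)
for i in rng.permutation(200):
    clf.partial_fit(X[i], Y[i])
acc = float(np.mean([np.sign(clf.decision(x)) == y for x, y in zip(X, Y)]))
```

The key property is that prediction cost stays O(budget) per example regardless of how many training points stream through, which is what makes budgeted methods attractive at scale.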
1 code implementation • 4 Jun 2018 • Nils Müller, Tobias Glasmachers
Our results give insights into which algorithmic mechanisms of modern ES are of value for the class of problems at hand, and they reveal principled limitations of the approach.
no code implementations • 9 Feb 2018 • Youhei Akimoto, Anne Auger, Tobias Glasmachers
This paper explores the use of the standard approach for proving runtime bounds in discrete domains, often referred to as drift analysis, in the context of optimization on a continuous domain.
no code implementations • 9 Jun 2017 • Tobias Glasmachers
We establish global convergence of the (1+1) evolution strategy, i.e., convergence to a critical point independent of the initial state.
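For reference, the algorithm under analysis is simple enough to state in full; here is a textbook-style sketch of the (1+1)-ES with the classic 1/5th success rule (an illustrative toy, not the paper's exact setting or constants):

```python
import numpy as np

def one_plus_one_es(f, x0, sigma=1.0, iters=500, rng=None):
    """(1+1)-ES with the 1/5th success rule: the up/down factors are
    balanced so that sigma is stationary exactly at a success rate of 1/5."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(x.size)  # Gaussian mutation
        fy = f(y)
        if fy <= fx:                 # elitist (plus) selection
            x, fx = y, fy
            sigma *= 1.5             # success: enlarge the step size
        else:
            sigma *= 1.5 ** -0.25    # failure: shrink it slightly
    return x, fx

# Usage: minimize the 5-dimensional sphere function.
x, fx = one_plus_one_es(lambda z: float(np.dot(z, z)), np.ones(5))
```

The elitist acceptance rule makes the best-so-far value monotone, which is one reason the (1+1)-ES is a natural first target for convergence analysis.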
2 code implementations • 18 May 2017 • Ilya Loshchilov, Tobias Glasmachers, Hans-Georg Beyer
The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is a popular method to deal with nonconvex and/or stochastic optimization problems when the gradient information is not available.
no code implementations • 26 Apr 2017 • Tobias Glasmachers
An end-to-end learning system is specifically designed so that all modules are differentiable.
no code implementations • 9 May 2016 • Ilya Loshchilov, Tobias Glasmachers
We propose a multi-objective optimization algorithm aimed at achieving good anytime performance over a wide range of problems.
no code implementations • 10 Feb 2016 • Aydin Demircioglu, Daniel Horn, Tobias Glasmachers, Bernd Bischl, Claus Weihs
Kernelized Support Vector Machines (SVMs) are among the best performing supervised learning methods.
no code implementations • 15 Jan 2014 • Tobias Glasmachers, Ürün Dogan
Coordinate descent (CD) algorithms have become the method of choice for solving a number of optimization problems in machine learning.
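A canonical CD instance in machine learning is the coordinate-wise soft-thresholding solver for the Lasso; the sketch below is a generic illustration of the method (not tied to this paper's solvers), updating one coordinate at a time while maintaining the residual incrementally.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, iters=100):
    """Cyclic coordinate descent for 0.5*||y - Xw||^2 + lam*||w||_1,
    with an incrementally maintained residual r = y - Xw."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)        # per-coordinate curvature
    r = y - X @ w
    for _ in range(iters):
        for j in range(d):
            r += X[:, j] * w[j]          # temporarily drop coordinate j
            rho = X[:, j] @ r            # 1-d least-squares correlation
            w[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * w[j]          # restore residual with new w[j]
    return w

# Usage: sparse recovery on synthetic data (3 active out of 10 features).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -3.0, 1.5]
y = X @ w_true + 0.01 * rng.standard_normal(50)
w = lasso_cd(X, y, lam=0.1)
```

Each coordinate update has a closed form, and the residual bookkeeping keeps the per-sweep cost at O(nd), which is what makes CD competitive on these problems.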
no code implementations • 31 Jul 2013 • Tobias Glasmachers
The sequential minimal optimization (SMO) algorithm and variants thereof are the de facto standard method for solving large quadratic programs for support vector machine (SVM) training.
no code implementations • 2 May 2013 • Somayeh Danafar, Paola M. V. Rancoita, Tobias Glasmachers, Kevin Whittingstall, Juergen Schmidhuber
Do two data samples come from different distributions?
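A common kernel-based way to answer this question is the maximum mean discrepancy (MMD) combined with a permutation test; the sketch below illustrates that general approach (the paper's own test statistics differ, so treat this as background, not the proposed method).

```python
import numpy as np

def mmd2(X, Y, gamma=0.5):
    """Biased estimate of the squared maximum mean discrepancy (RBF kernel)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

def permutation_pvalue(X, Y, n_perm=200, rng=None):
    """p-value of the null 'same distribution': compare the observed
    statistic against statistics computed on permuted pooled samples."""
    rng = rng or np.random.default_rng(0)
    stat = mmd2(X, Y)
    Z = np.vstack([X, Y])
    exceed = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(Z))
        exceed += mmd2(Z[idx[:len(X)]], Z[idx[len(X):]]) >= stat
    return (exceed + 1) / (n_perm + 1)

# Usage: mean-shifted Gaussians should be detected as different.
rng = np.random.default_rng(1)
A = rng.normal(0.0, 1.0, (60, 2))
B = rng.normal(1.0, 1.0, (60, 2))
p = permutation_pvalue(A, B)
```

The permutation scheme calibrates the test without distributional assumptions on the statistic, at the price of recomputing it for each shuffled split.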
1 code implementation • 22 Jun 2011 • Daan Wierstra, Tom Schaul, Tobias Glasmachers, Yi Sun, Jürgen Schmidhuber
This paper presents Natural Evolution Strategies (NES), a recent family of algorithms that constitute a more principled approach to black-box optimization than established evolutionary algorithms.
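The search-gradient idea behind NES can be illustrated with a minimal separable-NES-style sketch (a toy with ad hoc parameter choices, not the paper's reference implementation): sample from a Gaussian, rank the samples, and follow the utility-weighted gradient of the distribution parameters.

```python
import numpy as np

def snes_minimize(f, mu, sigma=1.0, lr_sigma=0.2, pop=20, iters=300, rng=None):
    """Separable-NES-style search gradient: adapt the mean and the
    per-coordinate log step sizes of a Gaussian search distribution,
    using rank-based fitness shaping."""
    rng = rng or np.random.default_rng(0)
    mu = np.asarray(mu, dtype=float)
    sigma = np.full(mu.size, float(sigma))
    # rank-based utilities: best sample gets the largest weight, sum is 0
    u = np.maximum(0.0, np.log(pop / 2 + 1) - np.log(np.arange(1, pop + 1)))
    u = u / u.sum() - 1.0 / pop
    for _ in range(iters):
        eps = rng.standard_normal((pop, mu.size))
        fitness = np.array([f(mu + sigma * e) for e in eps])
        order = np.argsort(fitness)              # minimization: best first
        grad_mu = u @ eps[order]                 # search gradient wrt mean
        grad_logsig = u @ (eps[order] ** 2 - 1)  # ... wrt log step sizes
        mu = mu + sigma * grad_mu
        sigma = sigma * np.exp(lr_sigma * grad_logsig)
    return mu

# Usage: minimize the 4-dimensional sphere function from a distant start.
mu = snes_minimize(lambda z: float(np.dot(z, z)), np.full(4, 3.0))
```

Rank-based fitness shaping makes the update invariant to monotone transformations of the objective, one of the properties that distinguishes NES from naive stochastic search.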
no code implementations • NeurIPS 2010 • Tobias Glasmachers
Steinwart was the first to prove universal consistency of support vector machine classification.