1 code implementation • 17 Apr 2025 • Riza Velioglu, Petra Bevandic, Robin Chan, Barbara Hammer
Virtual Try-On (VTON) generates images of a person in a specified garment from a target photo and a standardized garment image, while a more challenging variant, Person-to-Person Virtual Try-On (p2p-VTON), uses a photo of another person wearing the garment instead.
no code implementations • 16 Apr 2025 • Thorben Markmann, Michiel Straat, Sebastian Peitz, Barbara Hammer
We investigate the generalizability of control across varying initial conditions and turbulence levels and introduce a reward-shaping technique to accelerate training.
no code implementations • 11 Mar 2025 • Bhargav Acharya, William Saakyan, Barbara Hammer, Hanna Drimalla
Specifically, we evaluate four classical methods and four deep learning-based rPPG estimation methods in terms of their generalization ability to changing scenarios, including low lighting conditions and elevated heart rates.
no code implementations • 5 Mar 2025 • Isaac Roberts, Alexander Schulz, Sarah Schroeder, Fabian Hinder, Barbara Hammer
In this work, we propose to explain the uncertainty in high-dimensional data classification settings by means of concept activation vectors which give rise to local and global explanations of uncertainty.
1 code implementation • 11 Feb 2025 • Inaam Ashraf, André Artelt, Barbara Hammer
Water distribution systems (WDSs) are an important part of critical infrastructure becoming increasingly significant in the face of climate change and urban population growth.
no code implementations • 28 Jan 2025 • Fabian Fumagalli, Maximilian Muschalik, Paolo Frazzetto, Janine Strotherm, Luca Hermes, Alessandro Sperduti, Eyke Hüllermeier, Barbara Hammer
In explainable artificial intelligence (XAI), the Shapley Value (SV) is the predominant method to quantify contributions of individual features to a ML model's output.
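The Shapley Value averages a feature's marginal contribution over all possible coalitions of the remaining features. A minimal sketch of the exact (exponential-time) computation on a toy additive game; the game and all names here are illustrative, not taken from the paper:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values by enumerating all coalitions (feasible only for small n)."""
    phi = [0.0] * n_features
    players = range(n_features)
    for i in players:
        others = [j for j in players if j != i]
        for size in range(len(others) + 1):
            for coalition in combinations(others, size):
                S = set(coalition)
                # Shapley kernel weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n_features - len(S) - 1) / factorial(n_features)
                phi[i] += weight * (value_fn(S | {i}) - value_fn(S))
    return phi

# Toy additive game: a coalition's value is the sum of its members' weights,
# so each player's Shapley value equals its own weight.
weights = [1.0, 2.0, 3.0]
values = shapley_values(lambda S: sum(weights[j] for j in S), 3)
```

For additive games the Shapley values recover the per-player weights exactly, which makes this a convenient sanity check for approximate estimators.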
Explainable Artificial Intelligence (XAI)
1 code implementation • 27 Jan 2025 • Michiel Straat, Thorben Markmann, Barbara Hammer
We train Fourier Neural Operator (FNO) surrogate models for Rayleigh-Bénard Convection (RBC), a model for convection processes that occur in nature and industrial settings.
1 code implementation • 22 Dec 2024 • Fabian Fumagalli, Maximilian Muschalik, Eyke Hüllermeier, Barbara Hammer, Julia Herbinger
Feature-based explanations, using perturbations or gradients, are a prevalent tool to understand decisions of black box machine learning models.
no code implementations • 12 Dec 2024 • Fabian Hinder, Valerie Vaquet, David Komnick, Barbara Hammer
Besides the classical offline setup of machine learning, stream learning constitutes a well-established setup where data arrives over time in potentially non-stationary environments.
1 code implementation • 27 Nov 2024 • Riza Velioglu, Petra Bevandic, Robin Chan, Barbara Hammer
This paper introduces Virtual Try-Off (VTOFF), a novel task focused on generating standardized garment images from single photos of clothed individuals.
Ranked #1 on Virtual Try-Off on VITON-HD
no code implementations • 25 Nov 2024 • Fabian Hinder, Valerie Vaquet, Barbara Hammer
Concept drift refers to the change of data distributions over time.
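One common way to operationalise this definition is to compare the empirical distributions of two data windows and flag drift when they differ too much. A minimal sketch using the two-sample Kolmogorov-Smirnov statistic; this toy detector is illustrative only, not the method studied in the paper:

```python
import bisect

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    a, b = sorted(a), sorted(b)
    def ecdf(xs, v):
        return bisect.bisect_right(xs, v) / len(xs)  # fraction of samples <= v
    return max(abs(ecdf(a, v) - ecdf(b, v)) for v in a + b)

reference = [0.1 * i for i in range(100)]      # window before a suspected change point
shifted = [0.1 * i + 5.0 for i in range(100)]  # same shape, shifted mean: abrupt drift
```

Raising an alarm when the statistic exceeds a critical value turns this into a simple drift detector on sliding windows.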
no code implementations • 23 Nov 2024 • Filip Ilievski, Barbara Hammer, Frank van Harmelen, Benjamin Paassen, Sascha Saralajew, Ute Schmid, Michael Biehl, Marianna Bolognesi, Xin Luna Dong, Kiril Gashteovski, Pascal Hitzler, Giuseppe Marra, Pasquale Minervini, Martin Mundt, Axel-Cyrille Ngonga Ngomo, Alessandro Oltramari, Gabriella Pasi, Zeynep G. Saribatur, Luciano Serafini, John Shawe-Taylor, Vered Shwartz, Gabriella Skitalinskaya, Clemens Stachl, Gido M. van de Ven, Thomas Villmann
A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
1 code implementation • 17 Oct 2024 • Janine Strotherm, Barbara Hammer
As relevant examples such as software for predicting future criminality [1] show, fairness of AI-based decision support tools that affect the social domain constitutes an important area of research.
1 code implementation • 16 Oct 2024 • Felix Störck, Fabian Hinder, Johannes Brinkrolf, Benjamin Paassen, Valerie Vaquet, Barbara Hammer
The contribution of this work is twofold: 1) we develop a general framework for fair machine learning of partition-based models that does not depend on a specific fairness definition, and 2) we derive a fair version of learning vector quantization (LVQ) as a specific instantiation.
no code implementations • 16 Oct 2024 • Valerie Vaquet, Fabian Hinder, André Artelt, Inaam Ashraf, Janine Strotherm, Jonas Vaquet, Johannes Brinkrolf, Barbara Hammer
Research on methods for planning and controlling water distribution networks gains increasing relevance as the availability of drinking water decreases as a consequence of climate change.
1 code implementation • 2 Oct 2024 • Maximilian Muschalik, Hubert Baniecki, Fabian Fumagalli, Patrick Kolpaczki, Barbara Hammer, Eyke Hüllermeier
In this work, we introduce shapiq, an open-source Python package that unifies state-of-the-art algorithms to efficiently compute SVs and any-order SIs in an application-agnostic framework.
1 code implementation • 30 Jul 2024 • Philip Kenneweg, Tristan Kenneweg, Fabian Fumagalli, Barbara Hammer
In recent studies, line search methods have been demonstrated to significantly enhance the performance of conventional stochastic gradient descent techniques across various datasets and architectures, while making an otherwise critical choice of learning rate schedule superfluous.
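As a hedged illustration of the idea (not the specific method of the paper), a backtracking Armijo line search replaces a fixed learning rate by shrinking a trial step until a sufficient-decrease condition holds:

```python
def armijo_line_search(f, grad, x, direction, alpha0=1.0, beta=0.5, c=1e-4):
    """Backtracking line search: shrink the step until the Armijo condition holds."""
    alpha = alpha0
    fx = f(x)
    slope = sum(g * d for g, d in zip(grad, direction))  # directional derivative (negative for descent)
    while f([xi + alpha * di for xi, di in zip(x, direction)]) > fx + c * alpha * slope:
        alpha *= beta
    return alpha

# Minimise the ill-conditioned quadratic f(x) = x0^2 + 10*x1^2 with steepest descent.
def f(x): return x[0] ** 2 + 10 * x[1] ** 2
def grad_f(x): return [2 * x[0], 20 * x[1]]

x = [5.0, 5.0]
for _ in range(100):
    g = grad_f(x)
    d = [-gi for gi in g]
    a = armijo_line_search(f, g, x, d)
    x = [xi + a * di for xi, di in zip(x, d)]
```

No learning rate schedule is tuned here; the accepted step size adapts automatically to the local curvature, which is the property the quoted sentence refers to.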
no code implementations • 5 Jun 2024 • André Artelt, Barbara Hammer
In this work, we apply the concept of data valuation to the significant area of model evaluations, focusing on how individual training samples impact a model's internal reasoning rather than only its predictive performance.
no code implementations • 4 Jun 2024 • André Artelt, Marios S. Kyriakou, Stelios G. Vrachimis, Demetrios G. Eliades, Barbara Hammer, Marios M. Polycarpou
Drinking water is a vital resource for humanity, and thus, Water Distribution Networks (WDNs) are considered critical infrastructures in modern societies.
no code implementations • 17 May 2024 • Fabian Fumagalli, Maximilian Muschalik, Patrick Kolpaczki, Eyke Hüllermeier, Barbara Hammer
As a result, we propose KernelSHAP-IQ, a direct extension of KernelSHAP for SII, and demonstrate state-of-the-art performance for feature interactions.
1 code implementation • 16 May 2024 • Christian Internò, Elena Raponi, Niki van Stein, Thomas Bäck, Markus Olhofer, Yaochu Jin, Barbara Hammer
The rapid proliferation of smart devices coupled with the advent of 6G networks has profoundly reshaped the domain of collaborative machine learning.
1 code implementation • 10 May 2024 • Thorben Markmann, Michiel Straat, Barbara Hammer
We conjecture that this is due to the LRAN's flexibility in learning complicated observables from data, thereby serving as a viable surrogate model for the main structure of fluid dynamics in turbulent convection settings.
1 code implementation • 12 Apr 2024 • Riza Velioglu, Robin Chan, Barbara Hammer
In the realm of fashion object detection and segmentation for online shopping images, existing state-of-the-art fashion parsing models encounter limitations, particularly when exposed to non-model-worn apparel and close-up shots.
1 code implementation • 27 Mar 2024 • Inaam Ashraf, Janine Strotherm, Luca Hermes, Barbara Hammer
In this realm, we propose a novel and efficient machine learning emulator, more precisely, a physics-informed deep learning (DL) model, for hydraulic state estimation in WDS.
1 code implementation • 27 Mar 2024 • Philip Kenneweg, Sarah Schröder, Alexander Schulz, Barbara Hammer
It is problematic that most debiasing approaches are transferred directly from word embeddings; they therefore fail to take into account the nonlinear nature of sentence embedders and the embeddings they produce.
1 code implementation • 27 Mar 2024 • Philip Kenneweg, Leonardo Galli, Tristan Kenneweg, Barbara Hammer
Recent works have shown that line search methods greatly increase performance of traditional stochastic gradient descent methods on a variety of datasets and architectures [1], [2].
1 code implementation • 27 Mar 2024 • Philip Kenneweg, Tristan Kenneweg, Barbara Hammer
In recent studies, line search methods have shown significant improvements in the performance of traditional stochastic gradient descent techniques, eliminating the need for a specific learning rate schedule.
1 code implementation • 27 Mar 2024 • Philip Kenneweg, Sarah Schröder, Barbara Hammer
Pre-training of language models on large text corpora is common practice in Natural Language Processing.
1 code implementation • 27 Mar 2024 • Philip Kenneweg, Alexander Schulz, Sarah Schröder, Barbara Hammer
We combine the learning rate distributions thus found and show that they generalize to better performance with respect to the problem of catastrophic forgetting.
1 code implementation • 26 Mar 2024 • Isaac Roberts, Alexander Schulz, Luca Hermes, Barbara Hammer
Attention-based Large Language Models (LLMs) are the state of the art in natural language processing (NLP).
1 code implementation • 26 Feb 2024 • Tristan Kenneweg, Philip Kenneweg, Barbara Hammer
We use a dataset created this way for the development and evaluation of a boolean agent RAG setup: a system in which an LLM can decide whether to query a vector database or not, thus saving tokens on questions that can be answered with internal knowledge.
1 code implementation • 13 Feb 2024 • André Artelt, Shubham Sharma, Freddy Lecué, Barbara Hammer
This work studies the vulnerability of counterfactual explanations to data poisoning.
1 code implementation • 27 Jan 2024 • Sarah Schröder, Alexander Schulz, Fabian Hinder, Barbara Hammer
Furthermore, we formally analyze cosine based scores from the literature with regard to these requirements.
1 code implementation • 22 Jan 2024 • Maximilian Muschalik, Fabian Fumagalli, Barbara Hammer, Eyke Hüllermeier
While shallow decision trees may be interpretable, larger ensemble models like gradient-boosted trees, which often set the state of the art in machine learning problems involving tabular data, still remain black box models.
Explainable Artificial Intelligence (XAI)
1 code implementation • 3 Jan 2024 • Valerie Vaquet, Fabian Hinder, Barbara Hammer
In this work, we explore the potential of model-loss-based and distribution-based drift detection methods to tackle leakage detection.
1 code implementation • 15 Dec 2023 • Fabian Hinder, Valerie Vaquet, Barbara Hammer
Concept drift, i.e., the change of the data-generating distribution, can render machine learning models inaccurate.
no code implementations • 24 Oct 2023 • Fabian Hinder, Valerie Vaquet, Barbara Hammer
In addition to providing a systematic literature review, this work provides precise mathematical definitions of the considered problems and contains standardized experiments on parametric artificial datasets allowing for a direct comparison of different strategies for detection and localization.
1 code implementation • 24 Oct 2023 • Valerie Vaquet, Fabian Hinder, Jonas Vaquet, Kathrin Lammers, Lars Quakernack, Barbara Hammer
Facing climate change, the already limited availability of drinking water will decrease in the future, rendering drinking water an increasingly scarce resource.
no code implementations • 17 Jul 2023 • Janine Strotherm, Alissa Müller, Barbara Hammer, Benjamin Paaßen
We explain the main fairness definitions and strategies for achieving fairness using concrete examples and place fairness research in the European context.
1 code implementation • 13 Jun 2023 • Ulrike Kuhl, André Artelt, Barbara Hammer
However, potential benefits and drawbacks of the directionality of CFEs for user behavior in xAI remain unclear.
1 code implementation • 13 Jun 2023 • Maximilian Muschalik, Fabian Fumagalli, Rohit Jagtani, Barbara Hammer, Eyke Hüllermeier
Post-hoc explanation techniques such as the well-established partial dependence plot (PDP), which investigates feature dependencies, are used in explainable artificial intelligence (XAI) to understand black-box machine learning models.
1 code implementation • 25 May 2023 • Paul Stahlhofen, André Artelt, Luca Hermes, Barbara Hammer
Many Machine Learning models are vulnerable to adversarial attacks: There exist methodologies that add a small (imperceptible) perturbation to an input such that the model comes up with a wrong prediction.
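The classic fast gradient sign method (FGSM) illustrates the quoted sentence on a tiny, hypothetical logistic model; the weights are chosen purely for illustration and have nothing to do with the water-system models the paper attacks:

```python
import math

# Hypothetical logistic classifier with fixed weights (illustrative only).
w, b = [2.0, -3.0], 0.5

def predict_proba(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: step by eps in the sign of the loss gradient."""
    p = predict_proba(x)
    # For logistic regression with cross-entropy loss: dL/dx = (p - y) * w
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1 if g > 0 else -1) for xi, g in zip(x, grad)]

x = [1.0, 0.2]               # confidently classified as the positive class
x_adv = fgsm(x, y=1, eps=0.5)  # small perturbation flips the prediction
```

Even this two-dimensional toy shows the core vulnerability: a bounded perturbation aligned with the loss gradient is enough to cross the decision boundary.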
no code implementations • 16 Mar 2023 • Fabian Hinder, Valerie Vaquet, Johannes Brinkrolf, Barbara Hammer
To do so, we propose a methodology that reduces the explanation of concept drift to an explanation of models that are trained in a suitable way to extract relevant information regarding the drift.
no code implementations • 2 Mar 2023 • Maximilian Muschalik, Fabian Fumagalli, Barbara Hammer, Eyke Hüllermeier
Existing methods for explainable artificial intelligence (XAI), including popular feature importance measures such as SAGE, are mostly restricted to the batch learning scenario.
Explainable Artificial Intelligence (XAI)
no code implementations • 8 Feb 2023 • Valerie Vaquet, Fabian Hinder, Johannes Brinkrolf, Barbara Hammer
Learning from non-stationary data streams is a research direction that gains increasing interest as more data in form of streams becomes available, for example from social media, smartphones, or industrial process monitoring.
no code implementations • 2 Dec 2022 • Fabian Hinder, Valerie Vaquet, Johannes Brinkrolf, Barbara Hammer
More precisely, we relate a change of the ITTE to the presence of real drift, i.e., a changed posterior, and to a change of the training result under the assumption of optimality.
1 code implementation • 1 Dec 2022 • Riza Velioglu, Jan Philip Göpfert, André Artelt, Barbara Hammer
On the other hand, Machine Learning (ML) benefits from the vast amount of data available and can deal with high-dimensional sources, yet it has rarely been applied in such processes.
Explainable Artificial Intelligence (XAI)
1 code implementation • 27 Nov 2022 • André Artelt, Barbara Hammer
Counterfactual explanations are a popular type of explanation for making the outcomes of a decision making system transparent to the user.
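For a linear model, the closest counterfactual has a closed form: project the input onto the decision boundary and step just past it. A minimal sketch; the "loan model" weights are hypothetical and not from the paper:

```python
def counterfactual(x, w, b, margin=1e-3):
    """Closest point (in L2) just past a linear decision boundary w.x + b = 0."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    norm_sq = sum(wi * wi for wi in w)
    # Moving x by -(z / ||w||^2) * w lands exactly on the boundary;
    # the margin factor pushes slightly beyond it so the outcome flips.
    scale = (z / norm_sq) * (1 + margin)
    return [xi - scale * wi for xi, wi in zip(x, w)]

w, b = [1.0, 2.0], -4.0  # hypothetical loan model: approve if w.x + b > 0
x = [1.0, 1.0]           # score -1: rejected
x_cf = counterfactual(x, w, b)  # minimally changed input that gets approved
```

The difference `x_cf - x` is the counterfactual explanation: the smallest change to the input that would have produced the other outcome.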
1 code implementation • 23 Nov 2022 • André Artelt, Kleanthis Malialis, Christos Panayiotou, Marios Polycarpou, Barbara Hammer
Consequently, learning models operating on the data stream might become obsolete, and need costly and difficult adjustments such as retraining or adaptation.
1 code implementation • 21 Nov 2022 • Dominik Stallmann, Philip Kenneweg, Barbara Hammer
We make the data sets available at https://pub.uni-bielefeld.de/record/2960030.
1 code implementation • 17 Nov 2022 • Inaam Ashraf, Luca Hermes, André Artelt, Barbara Hammer
We investigate the task of missing value estimation in graphs as given by water distribution systems (WDS) based on sparse signals as a representative machine learning challenge in the domain of critical infrastructure.
no code implementations • 5 Sep 2022 • Fabian Fumagalli, Maximilian Muschalik, Eyke Hüllermeier, Barbara Hammer
Explainable Artificial Intelligence (XAI) has mainly focused on static learning scenarios so far.
Explainable Artificial Intelligence (XAI)
2 code implementations • 5 Jul 2022 • André Artelt, Barbara Hammer
In this work, we propose to explain rejects by semifactual explanations, an instance of example-based explanation methods, which themselves have not been widely considered in the XAI community yet.
1 code implementation • 15 Jun 2022 • André Artelt, Alexander Schulz, Barbara Hammer
Dimensionality reduction is a popular preprocessing step and a widely used tool in data mining.
1 code implementation • 18 May 2022 • André Artelt, Stelios Vrachimis, Demetrios Eliades, Marios Polycarpou, Barbara Hammer
Transparency is a major requirement of modern AI-based decision-making systems deployed in the real world.
1 code implementation • 16 May 2022 • André Artelt, Roel Visser, Barbara Hammer
The application of machine learning based decision-making systems in safety-critical areas requires reliable, high-certainty predictions.
no code implementations • 13 May 2022 • Fabian Hinder, André Artelt, Valerie Vaquet, Barbara Hammer
The notion of concept drift refers to the phenomenon that the data-generating distribution changes over time; as a consequence, machine learning models may become inaccurate and need adjustment.
1 code implementation • 11 May 2022 • Ulrike Kuhl, André Artelt, Barbara Hammer
Following the view of psychological plausibility as comparative similarity, this may be explained by the fact that users in the closest condition experience their CFEs as more psychologically plausible than the computationally plausible counterpart.
1 code implementation • 6 May 2022 • Ulrike Kuhl, André Artelt, Barbara Hammer
Thus, to advance the field of XAI, we introduce the Alien Zoo, an engaging, web-based and game-inspired experimental framework.
1 code implementation • 14 Apr 2022 • Andrea Castellani, Sebastian Schmitt, Barbara Hammer
Furthermore, we propose a drift-dependent dynamic budget strategy, which uses a variable distribution of the labelling budget over time, after a detected drift.
1 code implementation • 4 Apr 2022 • Jonathan Jakob, André Artelt, Martina Hasenjäger, Barbara Hammer
In this work, we propose an adaptation of the incremental SAM-kNN classifier for regression to build a residual-based anomaly detection system for water distribution networks that is able to adapt to any kind of change.
no code implementations • 28 Mar 2022 • Sarah Schröder, Alexander Schulz, Barbara Hammer
With the enormous popularity of large language models, many researchers have raised ethical concerns regarding social biases incorporated in such models.
1 code implementation • 21 Feb 2022 • Lisa Kühnel, Alexander Schulz, Barbara Hammer, Juliane Fluck
Recent developments in transfer learning have boosted the advancements in natural language processing tasks.
no code implementations • 19 Feb 2022 • Fabian Hinder, Valerie Vaquet, Barbara Hammer
In this paper, we analyze structural properties of the drift induced signals in the context of different metrics.
1 code implementation • 15 Feb 2022 • André Artelt, Johannes Brinkrolf, Roel Visser, Barbara Hammer
While machine learning models are usually assumed to always output a prediction, there also exist extensions in the form of reject options which allow the model to reject inputs where only a prediction with an unacceptably low certainty would be possible.
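A minimal sketch of such a reject option, assuming a simple confidence threshold on predicted class probabilities (the papers in this line of work study more refined certainty measures):

```python
def predict_with_reject(proba, threshold=0.8):
    """Return the arg-max class, or None (reject) when the top probability is too low."""
    top = max(range(len(proba)), key=lambda k: proba[k])
    return top if proba[top] >= threshold else None

confident = predict_with_reject([0.95, 0.05])  # clear case: predict class 0
uncertain = predict_with_reject([0.55, 0.45])  # ambiguous case: reject
```

The threshold trades coverage against accuracy: raising it rejects more inputs but makes the remaining predictions more reliable.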
1 code implementation • 11 Feb 2022 • Luca Hermes, Barbara Hammer, Andrew Melnik, Riza Velioglu, Markus Vieth, Malte Schilling
Accurate traffic prediction is a key ingredient to enable traffic management like rerouting cars to reduce road congestion or regulating traffic via dynamic speed limits to maintain a steady flow.
no code implementations • 15 Nov 2021 • Sarah Schröder, Alexander Schulz, Philip Kenneweg, Robert Feldhans, Fabian Hinder, Barbara Hammer
However, lately some works have raised doubts about these metrics, showing that even though such metrics report low biases, other tests still reveal biases.
1 code implementation • 10 Oct 2021 • Luca Hermes, Barbara Hammer, Malte Schilling
Prediction of movements is essential for successful cooperation with intelligent systems.
1 code implementation • 16 Aug 2021 • Andrea Castellani, Sebastian Schmitt, Barbara Hammer
In the proposed framework, the actual method to detect a change in the statistics of incoming data samples can be chosen freely.
no code implementations • 30 Jun 2021 • Daniel Wiens, Barbara Hammer
Finding such a step size, without increasing the computational effort of single-step adversarial training, is still an open challenge.
1 code implementation • 17 May 2021 • André Artelt, Barbara Hammer
Transparency is an essential requirement of machine learning based decision-making systems that are deployed in the real world.
1 code implementation • 4 May 2021 • Benjamin Paaßen, Alexander Schulz, Barbara Hammer
In this paper, we introduce the reservoir stack machine, a model which can provably recognize all deterministic context-free languages and circumvents the training problem by training only the output layer of a recurrent net and employing auxiliary information during training about the desired interaction with a stack.
1 code implementation • 1 May 2021 • Andrea Castellani, Sebastian Schmitt, Barbara Hammer
However, sensor failures result in mislabeled training data samples which are hard to detect and remove from the dataset.
1 code implementation • 6 Apr 2021 • André Artelt, Fabian Hinder, Valerie Vaquet, Robert Feldhans, Barbara Hammer
We also propose a method for automatically finding regions in data space that are affected by a given model adaptation and thus should be explained.
1 code implementation • 3 Mar 2021 • André Artelt, Valerie Vaquet, Riza Velioglu, Fabian Hinder, Johannes Brinkrolf, Malte Schilling, Barbara Hammer
Counterfactual explanations explain a behavior to the user by proposing actions, as changes to the input, that would cause a different (specified) behavior of the system.
no code implementations • 25 Dec 2020 • Jan Philip Göpfert, Ulrike Kuhl, Lukas Hindemith, Heiko Wersing, Barbara Hammer
After developing a theoretical framework of intuitiveness as a property of algorithms, we introduce an active teaching paradigm involving a prototypical two-dimensional spatial learning task as a method to judge the efficacy of human-machine interactions.
1 code implementation • 1 Dec 2020 • Fabian Hinder, Jonathan Jakob, Barbara Hammer
The notion of concept drift refers to the phenomenon that the distribution underlying the observed data changes over time.
no code implementations • 8 Nov 2020 • Jan Philip Göpfert, Heiko Wersing, Barbara Hammer
When training automated systems, it has been shown to be beneficial to adapt the representation of data by learning a problem-specific metric.
1 code implementation • 20 Oct 2020 • Dominik Stallmann, Jan P. Göpfert, Julian Schmitz, Alexander Grünberger, Barbara Hammer
Motivation: Innovative microfluidic systems carry the promise to greatly facilitate spatio-temporal analysis of single cells under well-defined environmental conditions, allowing novel insights into population heterogeneity and opening new opportunities for fundamental and applied biotechnology.
1 code implementation • 6 Oct 2020 • André Artelt, Barbara Hammer
With the increasing deployment of machine learning systems in practice, transparency and explainability have become serious issues.
1 code implementation • 14 Sep 2020 • Benjamin Paaßen, Alexander Schulz, Terrence C. Stewart, Barbara Hammer
Differentiable neural computers extend artificial neural networks with an explicit memory without interference, thus enabling the model to perform classic computation tasks such as graph traversal.
no code implementations • 23 Jun 2020 • Fabian Hinder, Barbara Hammer
The notion of concept drift refers to the phenomenon that the distribution underlying the observed data changes over time; as a consequence, machine learning models may become inaccurate and need adjustment.
no code implementations • 21 May 2020 • Michiel Straat, Fthi Abadi, Zhuoyun Kan, Christina Göpfert, Barbara Hammer, Michael Biehl
We present a modelling framework for the investigation of supervised learning in non-stationary environments.
3 code implementations • 1 Apr 2020 • Lukas Pfannschmidt, Barbara Hammer
The problem of all-relevant feature selection is concerned with finding a relevant feature set with preserved redundancies.
1 code implementation • 12 Feb 2020 • André Artelt, Barbara Hammer
The increasing deployment of machine learning as well as legal regulations such as the EU's GDPR cause a need for user-friendly explanations of decisions proposed by machine learning models.
no code implementations • 10 Dec 2019 • Lukas Pfannschmidt, Jonathan Jakob, Fabian Hinder, Michael Biehl, Peter Tino, Barbara Hammer
In this contribution, we focus on feature selection paradigms, which enable us to uncover relevant factors of a given regularity based on a sparse model.
1 code implementation • 4 Dec 2019 • Fabian Hinder, André Artelt, Barbara Hammer
The notion of drift refers to the phenomenon that the distribution underlying the observed data changes over time.
1 code implementation • 15 Nov 2019 • André Artelt, Barbara Hammer
Due to the increasing use of machine learning in practice, it becomes more and more important to be able to explain the predictions and behavior of machine learning models.
no code implementations • 12 Nov 2019 • Babak Hosseini, Romain Montagne, Barbara Hammer
Convolutional neural networks (CNNs) are deep learning frameworks which are well-known for their notable performance in classification tasks.
no code implementations • 10 Nov 2019 • Babak Hosseini, Barbara Hammer
In this paper, we propose a novel interpretable multiple-kernel prototype learning (IMKPL) to construct highly interpretable prototypes in the feature space, which are also efficient for the discriminative representation of the data.
no code implementations • 21 Oct 2019 • Jan Philip Göpfert, Heiko Wersing, Barbara Hammer
In this contribution, we focus on the capabilities of explainers for convolutional deep neural networks in an extreme situation: a setting in which humans and networks fundamentally disagree.
1 code implementation • 19 Sep 2019 • Alexander Schulz, Fabian Hinder, Barbara Hammer
So far, most methods in the literature investigate the decision of the model for a single given input datum.
no code implementations • 19 Sep 2019 • Babak Hosseini, Barbara Hammer
In this research, we propose the interpretable kernel DR algorithm (I-KDR) as a new algorithm which maps the data from the feature space to a lower dimensional space where the classes are more condensed with less overlapping.
1 code implementation • 2 Aug 2019 • André Artelt, Barbara Hammer
The increasing use of machine learning in practice and legal regulations like the EU's GDPR create a need to explain the predictions and behavior of machine learning models.
no code implementations • 31 Jul 2019 • Christina Göpfert, Jan Philip Göpfert, Barbara Hammer
The existence of adversarial examples has led to considerable uncertainty regarding the trust one can justifiably put in predictions produced by automated systems.
no code implementations • 18 Mar 2019 • Michael Biehl, Fthi Abadi, Christina Göpfert, Barbara Hammer
We present a modelling framework for the investigation of prototype-based classifiers in non-stationary environments.
no code implementations • 12 Mar 2019 • Babak Hosseini, Barbara Hammer
In this work, we propose a novel confident K-SRC and dictionary learning algorithm (CKSC) which focuses on the discriminative reconstruction of the data based on its representation in the kernel space.
no code implementations • 12 Mar 2019 • Babak Hosseini, Barbara Hammer
The NLSSC algorithm is also formulated in the kernel-based framework (NLKSSC) which can represent the nonlinear structure of data.
no code implementations • 10 Mar 2019 • Babak Hosseini, Felix Hülsmann, Mario Botsch, Barbara Hammer
We are interested in the decomposition of motion data into a sparse linear combination of base functions which enable efficient data processing.
no code implementations • 8 Mar 2019 • Babak Hosseini, Barbara Hammer
Multiple kernel learning (MKL) algorithms combine different base kernels to obtain a more efficient representation in the feature space.
no code implementations • 5 Mar 2019 • Babak Hosseini, Barbara Hammer
Furthermore, we obtain sparse encodings for unseen classes based on the learned MKD attributes, and upon which we propose a simple but effective incremental clustering algorithm to categorize the unseen MTS classes in an unsupervised way.
no code implementations • 2 Mar 2019 • Lukas Pfannschmidt, Christina Göpfert, Ursula Neumann, Dominik Heider, Barbara Hammer
Most existing feature selection methods are insufficient for analytic purposes when dealing with high-dimensional data or redundant sensor signals, since features can be selected due to spurious effects or correlations rather than causal effects.
1 code implementation • 25 Feb 2019 • Jan Philip Göpfert, André Artelt, Heiko Wersing, Barbara Hammer
Convolutional neural networks have been used to achieve a string of successes during recent years, but their lack of interpretability remains a serious issue.
1 code implementation • 20 Feb 2019 • Lukas Pfannschmidt, Jonathan Jakob, Michael Biehl, Peter Tino, Barbara Hammer
The increasing occurrence of ordinal data, mainly sociodemographic, has led to renewed research interest in ordinal regression, i.e., the prediction of ordered classes.
no code implementations • 19 Dec 2018 • Cagatay Turkay, Nicola Pezzotti, Carsten Binnig, Hendrik Strobelt, Barbara Hammer, Daniel A. Keim, Jean-Daniel Fekete, Themis Palpanas, Yunhai Wang, Florin Rusu
We discuss these challenges and outline first steps towards progressiveness, which, we argue, will ultimately help to significantly speed-up the overall data science process.
no code implementations • ICML 2018 • Benjamin Paaßen, Claudio Gallicchio, Alessio Micheli, Barbara Hammer
Metric learning aims to improve classification accuracy by learning a distance measure which brings data points from the same class closer together and pushes data points from different classes further apart.
no code implementations • 25 Nov 2017 • Benjamin Paaßen, Alexander Schulz, Janne Hahne, Barbara Hammer
Machine learning models in practical settings are typically confronted with changes to the distribution of the incoming data.
no code implementations • 22 Aug 2017 • Benjamin Paaßen, Barbara Hammer, Thomas William Price, Tiffany Barnes, Sebastian Gross, Niels Pinkwart
In particular, we extend the Hint Factory by considering data of past students in all states which are similar to the student's current state and creating hints approximating the weighted average of all these reference states.
1 code implementation • 21 Apr 2017 • Benjamin Paaßen, Christina Göpfert, Barbara Hammer
We propose to phrase time series prediction as a regression problem and apply dissimilarity- or kernel-based regression techniques, such as 1-nearest neighbor, kernel regression and Gaussian process regression, which can be applied to graphs via graph kernels.
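Of the techniques listed, kernel regression is easy to sketch: a Nadaraya-Watson estimator weights training targets by a Gaussian kernel on the distance to the query. Scalar inputs are used here for brevity; for graphs, a graph kernel would supply the similarities instead:

```python
import math

def kernel_regression(train_x, train_y, query, bandwidth=1.0):
    """Nadaraya-Watson kernel regression with a Gaussian kernel on scalar inputs."""
    weights = [math.exp(-((query - xi) ** 2) / (2 * bandwidth ** 2)) for xi in train_x]
    # Prediction is the kernel-weighted average of the training targets.
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)
```

Because only pairwise (dis)similarities enter the estimator, the same formula applies whenever a kernel between structured objects such as graphs is available.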
no code implementations • 18 Oct 2016 • Babak Hosseini, Barbara Hammer
In this paper, we address the feasibility of LMNN's optimization constraints regarding these target points, and introduce a mathematical measure to evaluate the size of the feasible region of the optimization problem.
1 code implementation • 17 Oct 2016 • Babak Hosseini, Barbara Hammer
We investigate metric learning in the context of dynamic time warping (DTW), the by far most popular dissimilarity measure used for the comparison and analysis of motion capture data.
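A standard dynamic-programming implementation of plain DTW is sketched below; it is the unweighted baseline, whereas the paper learns the metric underlying the local costs:

```python
def dtw(a, b):
    """Dynamic time warping distance between two sequences via dynamic programming."""
    INF = float("inf")
    n, m = len(a), len(b)
    # D[i][j]: cost of the best warping path aligning a[:i] with b[:j].
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])  # local cost; metric learning would adapt this
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Repeating a sample, as in comparing [1, 2, 3] with [1, 2, 2, 3], costs nothing under DTW, which is exactly why it suits time-warped motion capture data.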
no code implementations • 23 Mar 2015 • Lydia Fischer, Barbara Hammer, Heiko Wersing
We analyse optimum reject strategies for prototype-based classifiers and real-valued rejection measures, using the distance of a data point to the closest prototype or probabilistic counterparts.
1 code implementation • 18 Oct 2011 • Wouter Lueks, Bassam Mokbel, Michael Biehl, Barbara Hammer
We argue that quality measures, as general and flexible evaluation tools, should have parameters with a direct and intuitive interpretation as to which specific error types are tolerated or penalized.