1 code implementation • 2 Oct 2024 • Maximilian Muschalik, Hubert Baniecki, Fabian Fumagalli, Patrick Kolpaczki, Barbara Hammer, Eyke Hüllermeier
In this work, we introduce shapiq, an open-source Python package that unifies state-of-the-art algorithms to efficiently compute Shapley values (SVs) and any-order Shapley interactions (SIs) in an application-agnostic framework.
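For orientation, the basic quantity that shapiq builds on is the Shapley value; a brute-force computation over all coalitions — purely illustrative, exponential in the number of players, and not shapiq's actual API (the helper name below is hypothetical) — looks as follows:

```python
from itertools import combinations
from math import factorial

def exact_shapley_values(value, players):
    """Brute-force Shapley values for a cooperative game.

    `value` maps a frozenset of players to a real-valued payoff;
    `players` is the full player set (e.g., feature indices).
    Exponential in the number of players -- illustration only.
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                S = frozenset(coalition)
                # Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (value(S | {i}) - value(S))
        phi[i] = total
    return phi

# Toy game: v(S) = |S|^2 on three symmetric players -> each gets 3.0
v = lambda S: len(S) ** 2
print(exact_shapley_values(v, [0, 1, 2]))
```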
1 code implementation • 6 Sep 2024 • Clemens Damke, Eyke Hüllermeier
In this work, we study the influence of domain-specific characteristics when defining a meaningful notion of predictive uncertainty on graph data.
no code implementations • 14 Aug 2024 • Subhabrata Dutta, Timo Kaufmann, Goran Glavaš, Ivan Habernal, Kristian Kersting, Frauke Kreuter, Mira Mezini, Iryna Gurevych, Eyke Hüllermeier, Hinrich Schuetze
While there is a widespread belief that artificial general intelligence (AGI) -- or even superhuman AI -- is imminent, complex problems in expert domains are far from being solved.
1 code implementation • 28 Jun 2024 • Mohamed Karim Belaid, Maximilian Rabus, Eyke Hüllermeier
Pairwise difference learning (PDL) has recently been introduced as a new meta-learning technique for regression.
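A rough sketch of the PDL idea under our own assumptions about the anchoring scheme (class and parameter names below are hypothetical, not the authors' implementation): a base regressor is trained on instance pairs to predict target differences, and a new instance is scored by averaging anchored predictions over the training set.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

class PairwiseDifferenceRegressor:
    """Sketch of pairwise difference learning (PDL) for regression.

    A base regressor is trained on concatenated instance pairs (x_i, x_j)
    to predict the target difference y_i - y_j; a new instance x is scored
    by averaging the anchored predictions f([x, x_j]) + y_j over all
    training anchors x_j.  Quadratic in the training-set size -- sketch only.
    """

    def __init__(self, base=None):
        self.base = base or RandomForestRegressor(n_estimators=100, random_state=0)

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
        n = len(X)
        pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
        Z = np.array([np.concatenate([X[i], X[j]]) for i, j in pairs])
        d = np.array([y[i] - y[j] for i, j in pairs])
        self.base.fit(Z, d)
        self.X_, self.y_ = X, y
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        preds = []
        for x in X:
            Z = np.array([np.concatenate([x, xj]) for xj in self.X_])
            preds.append(np.mean(self.base.predict(Z) + self.y_))
        return np.array(preds)
```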
1 code implementation • 25 Jun 2024 • Valentin Margraf, Marcel Wever, Sandra Gilhuber, Gabriel Marques Tavares, Thomas Seidl, Eyke Hüllermeier
This particularly holds for combining query strategies with different learning algorithms into active learning pipelines and for examining the impact of the choice of learning algorithm.
no code implementations • 24 Jun 2024 • Timo Kaufmann, Jannis Blüml, Antonia Wüst, Quentin Delfosse, Kristian Kersting, Eyke Hüllermeier
Properly defining a reward signal to efficiently train a reinforcement learning (RL) agent is a challenging task.
1 code implementation • 6 Jun 2024 • Clemens Damke, Eyke Hüllermeier
Challenging assumptions and postulates of state-of-the-art methods, we propose a novel approach that represents (epistemic) uncertainty in terms of mixtures of Dirichlet distributions and refers to the established principle of linear opinion pooling for propagating information between neighbored nodes in the graph.
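A minimal sketch of the pooling step, assuming each node's belief is stored as a list of (weight, Dirichlet-parameter) components; this is our own illustrative representation, not the paper's implementation:

```python
import numpy as np

# A node's belief: a mixture of Dirichlet distributions,
# given as a list of (mixture_weight, alpha_vector) pairs.
Mixture = list[tuple[float, np.ndarray]]

def linear_opinion_pool(neighbor_mixtures: list[Mixture],
                        pool_weights: list[float]) -> Mixture:
    """Pool neighboring nodes' Dirichlet mixtures by linear opinion pooling.

    The pooled belief is again a Dirichlet mixture: every component of every
    neighbor is kept, with its mixture weight rescaled by the neighbor's
    pooling weight (pooling weights are assumed to sum to one).
    """
    pooled: Mixture = []
    for mixture, w in zip(neighbor_mixtures, pool_weights):
        for pi, alpha in mixture:
            pooled.append((w * pi, alpha))
    return pooled

# Two neighbors over 3 classes, pooled with equal weight
m1 = [(1.0, np.array([5.0, 1.0, 1.0]))]
m2 = [(0.5, np.array([1.0, 4.0, 1.0])), (0.5, np.array([1.0, 1.0, 4.0]))]
print(linear_opinion_pool([m1, m2], [0.5, 0.5]))
```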
1 code implementation • 4 Jun 2024 • Yusuf Sale, Paul Hofman, Timo Löhr, Lisa Wimmer, Thomas Nagler, Eyke Hüllermeier
We present a novel approach to uncertainty quantification in classification tasks based on label-wise decomposition of uncertainty measures.
1 code implementation • 2 Jun 2024 • Arduin Findeis, Timo Kaufmann, Eyke Hüllermeier, Samuel Albanie, Robert Mullins
In constitutional AI, a set of principles (or constitution) is used to provide feedback and fine-tune AI models.
no code implementations • 17 May 2024 • Fabian Fumagalli, Maximilian Muschalik, Patrick Kolpaczki, Eyke Hüllermeier, Barbara Hammer
As a result, we propose KernelSHAP-IQ, a direct extension of KernelSHAP to the Shapley interaction index (SII), and demonstrate state-of-the-art performance for feature interactions.
no code implementations • 3 May 2024 • Moritz Herrmann, F. Julian D. Lange, Katharina Eggensperger, Giuseppe Casalicchio, Marcel Wever, Matthias Feurer, David Rügamer, Eyke Hüllermeier, Anne-Laure Boulesteix, Bernd Bischl
We warn against a common but incomplete understanding of empirical research in machine learning that leads to non-replicable results, makes findings unreliable, and threatens to undermine progress in the field.
no code implementations • 18 Apr 2024 • Paul Hofman, Yusuf Sale, Eyke Hüllermeier
Uncertainty representation and quantification are paramount in machine learning and constitute an important prerequisite for safety-critical applications.
no code implementations • 7 Mar 2024 • Julian Rodemann, Federico Croppi, Philipp Arens, Yusuf Sale, Julia Herbinger, Bernd Bischl, Eyke Hüllermeier, Thomas Augustin, Conor J. Walsh, Giuseppe Casalicchio
We address this issue by proposing ShapleyBO, a framework for interpreting BO's proposals via game-theoretic Shapley values, which quantify each parameter's contribution to BO's acquisition function.
1 code implementation • 16 Feb 2024 • Alireza Javanmardi, David Stutz, Eyke Hüllermeier
Credal sets are sets of probability distributions that are considered as candidates for an imprecisely known ground-truth distribution.
1 code implementation • 14 Feb 2024 • Mira Jürgens, Nis Meinert, Viktor Bengs, Eyke Hüllermeier, Willem Waegeman
Trustworthy ML systems should not only return accurate predictions, but also a reliable representation of their uncertainty.
1 code implementation • 25 Jan 2024 • Pritha Gupta, Marcel Wever, Eyke Hüllermeier
Though effective, emerging supervised machine learning based approaches to detect ILs are limited to binary system sensitive information and lack a comprehensive framework.
1 code implementation • 22 Jan 2024 • Maximilian Muschalik, Fabian Fumagalli, Barbara Hammer, Eyke Hüllermeier
While shallow decision trees may be interpretable, larger ensemble models like gradient-boosted trees, which often set the state of the art in machine learning problems involving tabular data, remain black-box models.
no code implementations • 30 Dec 2023 • Yusuf Sale, Paul Hofman, Lisa Wimmer, Eyke Hüllermeier, Thomas Nagler
Uncertainty quantification is a critical aspect of machine learning models, providing important insights into the reliability of predictions and aiding the decision-making process in real-world applications.
no code implementations • 22 Dec 2023 • Timo Kaufmann, Paul Weng, Viktor Bengs, Eyke Hüllermeier
Reinforcement learning from human feedback (RLHF) is a variant of reinforcement learning (RL) that learns from human feedback instead of relying on an engineered reward function.
no code implementations • 2 Dec 2023 • Yusuf Sale, Viktor Bengs, Michele Caprio, Eyke Hüllermeier
In the past couple of years, various approaches to representing and quantifying different types of predictive uncertainty in machine learning, notably in the setting of classification, have been proposed on the basis of second-order probability distributions, i.e., predictions in the form of distributions on probability distributions.
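A canonical example of such a second-order prediction in $K$-class classification is a Dirichlet distribution over the probability simplex $\Delta_K$,

$$ p(\theta \mid \alpha) \;=\; \frac{\Gamma\!\big(\sum_{k=1}^{K} \alpha_k\big)}{\prod_{k=1}^{K}\Gamma(\alpha_k)} \prod_{k=1}^{K} \theta_k^{\alpha_k - 1}, \qquad \alpha_k > 0, \; \theta \in \Delta_K , $$

so that a single prediction encodes both a most plausible class distribution and how strongly the learner commits to it.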
no code implementations • 20 Nov 2023 • Eli Verwimp, Rahaf Aljundi, Shai Ben-David, Matthias Bethge, Andrea Cossu, Alexander Gepperth, Tyler L. Hayes, Eyke Hüllermeier, Christopher Kanan, Dhireesha Kudithipudi, Christoph H. Lampert, Martin Mundt, Razvan Pascanu, Adrian Popescu, Andreas S. Tolias, Joost Van de Weijer, Bing Liu, Vincenzo Lomonaco, Tinne Tuytelaars, Gido M. van de Ven
Continual learning is a subfield of machine learning, which aims to allow machine learning models to continuously learn on new data, by accumulating knowledge without forgetting what was learned in the past.
no code implementations • 1 Oct 2023 • Viktor Bengs, Björn Haddenhorst, Eyke Hüllermeier
We consider the task of identifying the Copeland winner(s) in a dueling bandits problem with ternary feedback.
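For reference, the Copeland winner is determined by the pairwise win probabilities alone; a small sketch (function name hypothetical, with exact ties counted as half a win, one common convention):

```python
import numpy as np

def copeland_winners(pref: np.ndarray) -> list[int]:
    """Copeland winner(s) from a matrix of pairwise win probabilities.

    pref[i, j] is the probability that arm i beats arm j; an arm scores one
    point per opponent it beats with probability > 1/2 (ties count half).
    Winners are the arms with maximal Copeland score.
    """
    n = pref.shape[0]
    scores = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if pref[i, j] > 0.5:
                scores[i] += 1.0
            elif pref[i, j] == 0.5:
                scores[i] += 0.5
    best = scores.max()
    return [i for i in range(n) if scores[i] == best]

P = np.array([[0.5, 0.7, 0.6],
              [0.3, 0.5, 0.8],
              [0.4, 0.2, 0.5]])
print(copeland_winners(P))  # -> [0]
```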
no code implementations • 5 Sep 2023 • Amirhossein Vahidi, Simon Schoßer, Lisa Wimmer, Yawei Li, Bernd Bischl, Eyke Hüllermeier, Mina Rezaei
In this paper, we propose a novel probabilistic self-supervised learning method via Scoring Rule Minimization (ProSMIN), which leverages the power of probabilistic models to enhance representation quality and mitigate collapsing representations.
no code implementations • 28 Aug 2023 • Amirhossein Vahidi, Lisa Wimmer, Hüseyin Anil Gündüz, Bernd Bischl, Eyke Hüllermeier, Mina Rezaei
Ensembling a neural network is a widely recognized approach to enhance model performance, estimate uncertainty, and improve robustness in deep supervised learning.
no code implementations • 21 Aug 2023 • Sascha Henzgen, Eyke Hüllermeier
Measures of rank correlation are commonly used in statistics to capture the degree of concordance between two orderings of the same set of items.
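A standard example of such a measure is Kendall's $\tau$: with $C$ concordant and $D$ discordant item pairs among $n$ items (and no ties),

$$ \tau \;=\; \frac{C - D}{\binom{n}{2}} \;=\; \frac{2}{n(n-1)} \sum_{i<j} \operatorname{sgn}\big(\sigma(i)-\sigma(j)\big)\,\operatorname{sgn}\big(\pi(i)-\pi(j)\big), $$

where $\sigma$ and $\pi$ are the two rank vectors being compared.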
no code implementations • 13 Jul 2023 • Michele Caprio, Yusuf Sale, Eyke Hüllermeier, Insup Lee
In their seminal 1990 paper, Wasserman and Kadane establish an upper bound for the Bayes' posterior probability of a measurable set $A$, when the prior lies in a class of probability measures $\mathcal{P}$ and the likelihood is precise.
no code implementations • 16 Jun 2023 • Yusuf Sale, Michele Caprio, Eyke Hüllermeier
Adequate uncertainty representation and quantification have become imperative in various scientific disciplines, especially in machine learning and artificial intelligence.
1 code implementation • 13 Jun 2023 • Maximilian Muschalik, Fabian Fumagalli, Rohit Jagtani, Barbara Hammer, Eyke Hüllermeier
Post-hoc explanation techniques such as the well-established partial dependence plot (PDP), which investigates feature dependencies, are used in explainable artificial intelligence (XAI) to understand black-box machine learning models.
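For orientation, the classical batch PDP that this work builds on (and extends to a dynamic, incremental setting) can be sketched as follows; the helper name is hypothetical, and the usual caveat of assuming feature independence applies:

```python
import numpy as np

def partial_dependence(model, X, feature, grid):
    """One-dimensional partial dependence of `model` on `feature`.

    For each grid value v, the feature column is set to v for *all* rows of X
    and the predictions are averaged; the resulting curve shows the marginal
    effect of the feature under the (strong) assumption of feature independence.
    """
    X = np.asarray(X, dtype=float)
    pd_values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v
        pd_values.append(model.predict(X_mod).mean())
    return np.array(pd_values)
```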
1 code implementation • 1 Jun 2023 • Alireza Javanmardi, Yusuf Sale, Paul Hofman, Eyke Hüllermeier
While the predictions produced by conformal prediction are set-valued, the data used for training and calibration is supposed to be precise.
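As a reference point, the standard split conformal procedure with precise calibration labels — the setting this work relaxes — can be sketched as follows (function name hypothetical):

```python
import numpy as np

def split_conformal_sets(probs_cal, y_cal, probs_test, alpha=0.1):
    """Split conformal prediction sets for classification (precise labels).

    Nonconformity score: 1 minus the predicted probability of the true class.
    The threshold is the ceil((n+1)(1-alpha))-th smallest calibration score;
    a test prediction set keeps every class whose score does not exceed it.
    """
    probs_cal, probs_test = np.asarray(probs_cal), np.asarray(probs_test)
    y_cal = np.asarray(y_cal)
    n = len(y_cal)
    cal_scores = 1.0 - probs_cal[np.arange(n), y_cal]
    k = min(int(np.ceil((n + 1) * (1 - alpha))) - 1, n - 1)
    q_hat = np.sort(cal_scores)[k]
    return [np.where(1.0 - p <= q_hat)[0] for p in probs_test]
```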
1 code implementation • 1 Jun 2023 • Navid Ansari, Alireza Javanmardi, Eyke Hüllermeier, Hans-Peter Seidel, Vahid Babaei
Bayesian optimization (BO) provides a powerful framework for optimizing black-box, expensive-to-evaluate functions.
1 code implementation • NeurIPS 2023 • Petar Bevanda, Max Beier, Armin Lederer, Stefan Sosnowski, Eyke Hüllermeier, Sandra Hirche
Many machine learning approaches for decision making, such as reinforcement learning, rely on simulators or predictive models to forecast the time-evolution of quantities of interest, e.g., the state of an agent or the reward of a policy.
1 code implementation • 23 May 2023 • Julian Lienen, Eyke Hüllermeier
Label noise poses an important challenge in machine learning, especially in deep learning, in which large models with high expressive power dominate the field.
no code implementations • 30 Apr 2023 • Svenja Uhlemeyer, Julian Lienen, Eyke Hüllermeier, Hanno Gottschalk
We thereafter extend the DNN by $k$ empty classes and fine-tune it on the OoD data samples.
no code implementations • 2 Apr 2023 • Mohamed Karim Belaid, Dorra El Mekki, Maximilian Rabus, Eyke Hüllermeier
With the rapid growth of data availability and usage, quantifying the added value of each training data point has become a crucial process in the field of artificial intelligence.
no code implementations • 2 Mar 2023 • Maximilian Muschalik, Fabian Fumagalli, Barbara Hammer, Eyke Hüllermeier
Existing methods for explainable artificial intelligence (XAI), including popular feature importance measures such as SAGE, are mostly restricted to the batch learning scenario.
1 code implementation • 1 Feb 2023 • Jasmin Brandt, Marcel Wever, Dimitrios Iliadis, Viktor Bengs, Eyke Hüllermeier
Hyperparameter optimization (HPO) is concerned with the automated search for the most appropriate hyperparameter configuration (HPC) of a parameterized machine learning algorithm.
no code implementations • 1 Feb 2023 • Patrick Kolpaczki, Viktor Bengs, Maximilian Muschalik, Eyke Hüllermeier
The Shapley value, which is arguably the most popular approach for assigning a meaningful contribution value to players in a cooperative game, has recently been used intensively in explainable artificial intelligence.
no code implementations • 30 Jan 2023 • Viktor Bengs, Eyke Hüllermeier, Willem Waegeman
In this paper, we generalise these findings and prove a more fundamental result: There seems to be no loss function that provides an incentive for a second-order learner to faithfully represent its epistemic uncertainty in the same manner as proper scoring rules do for standard (first-order) learners.
1 code implementation • 30 Dec 2022 • Alireza Javanmardi, Eyke Hüllermeier
The main objective of Prognostics and Health Management is to estimate the Remaining Useful Lifetime (RUL), namely, the time that a system or a piece of equipment is still in working order before starting to function incorrectly.
1 code implementation • 1 Dec 2022 • Jasmin Brandt, Elias Schede, Viktor Bengs, Björn Haddenhorst, Eyke Hüllermeier, Kevin Tierney
We study the algorithm configuration (AC) problem, in which one seeks to find an optimal parameter configuration of a given target algorithm in an automated way.
1 code implementation • 7 Sep 2022 • Lisa Wimmer, Yusuf Sale, Paul Hofman, Bernd Bischl, Eyke Hüllermeier
The quantification of aleatoric and epistemic uncertainty in terms of conditional entropy and mutual information, respectively, has recently become quite common in machine learning.
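For concreteness, the decomposition in question is the standard entropy-based one: writing $\theta \sim p(\theta \mid D)$ for the (Bayesian) model parameters,

$$ \underbrace{H\big(\mathbb{E}_{p(\theta \mid D)}[p(y \mid x, \theta)]\big)}_{\text{total uncertainty}} \;=\; \underbrace{\mathbb{E}_{p(\theta \mid D)}\big[H\big(p(y \mid x, \theta)\big)\big]}_{\text{aleatoric (conditional entropy)}} \;+\; \underbrace{I(Y;\Theta \mid x, D)}_{\text{epistemic (mutual information)}} . $$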
no code implementations • 5 Sep 2022 • Fabian Fumagalli, Maximilian Muschalik, Eyke Hüllermeier, Barbara Hammer
Explainable Artificial Intelligence (XAI) has mainly focused on static learning scenarios so far.
1 code implementation • 11 Jun 2022 • Duc Anh Nguyen, Ron Levie, Julian Lienen, Gitta Kutyniok, Eyke Hüllermeier
The notion of neural collapse refers to several emergent phenomena that have been empirically observed across various canonical classification problems.
1 code implementation • 8 Jun 2022 • Mohamed Karim Belaid, Eyke Hüllermeier, Maximilian Rabus, Ralf Krestel
On the one hand, the number of published xAI algorithms underwent a boom, and it became difficult for practitioners to select the right tool.
1 code implementation • 30 May 2022 • Julian Lienen, Caglar Demir, Eyke Hüllermeier
One such method, so-called credal self-supervised learning, maintains pseudo-supervision in the form of sets of (instead of single) probability distributions over labels, thereby allowing for a flexible yet uncertainty-aware labeling.
no code implementations • 20 May 2022 • Thomas Mortier, Viktor Bengs, Eyke Hüllermeier, Stijn Luca, Willem Waegeman
In this paper, we extend the notion of calibration, which is commonly used to evaluate the validity of the aleatoric uncertainty representation of a single probabilistic classifier, to assess the validity of an epistemic uncertainty representation obtained by sets of probabilistic classifiers.
no code implementations • 13 Mar 2022 • Thomas Mortier, Eyke Hüllermeier, Krzysztof Dembczyński, Willem Waegeman
Set-valued prediction is a well-known concept in multi-class classification.
no code implementations • 11 Mar 2022 • Viktor Bengs, Eyke Hüllermeier, Willem Waegeman
Uncertainty quantification has received increasing attention in machine learning in the recent past.
no code implementations • 9 Feb 2022 • Viktor Bengs, Aadirupa Saha, Eyke Hüllermeier
In every round of the sequential decision problem, the learner makes a context-dependent selection of two choice alternatives (arms) to be compared with each other and receives feedback in the form of noisy preference information.
no code implementations • 9 Feb 2022 • Jasmin Brandt, Viktor Bengs, Björn Haddenhorst, Eyke Hüllermeier
We consider the combinatorial bandits problem with semi-bandit feedback under finite sampling budget constraints, in which the learner can carry out its action only for a limited number of times specified by an overall budget.
no code implementations • 3 Feb 2022 • Elias Schede, Jasmin Brandt, Alexander Tornede, Marcel Wever, Viktor Bengs, Eyke Hüllermeier, Kevin Tierney
We review existing AC literature within the lens of our taxonomies, outline relevant design choices of configuration approaches, contrast methods and problem variants against each other, and describe the state of AC in industry.
no code implementations • 2 Feb 2022 • Patrick Kolpaczki, Viktor Bengs, Eyke Hüllermeier
We propose the Beat the Winner Reset algorithm and prove a bound on its expected binary weak regret in the stationary case, which tightens the bound of current state-of-the-art algorithms.
no code implementations • 15 Dec 2021 • Eyke Hüllermeier
Recent applications of machine learning (ML) reveal a noticeable shift from its use for predictive modeling, i.e., the data-driven construction of models used mainly to predict ground-truth facts, to its use for prescriptive modeling.
1 code implementation • NeurIPS 2021 • Björn Haddenhorst, Viktor Bengs, Eyke Hüllermeier
The reliable identification of the “best” arm while keeping the sample complexity as low as possible is a common task in the field of multi-armed bandits.
no code implementations • 10 Nov 2021 • Tanja Tornede, Alexander Tornede, Jonas Hanselle, Marcel Wever, Felix Mohr, Eyke Hüllermeier
Therefore, we first elaborate on how to quantify the environmental footprint of an AutoML tool.
1 code implementation • 13 Sep 2021 • Alexander Tornede, Viktor Bengs, Eyke Hüllermeier
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
no code implementations • 10 Sep 2021 • Eyke Hüllermeier, Felix Mohr, Alexander Tornede, Marcel Wever
The notion of bounded rationality originated from the insight that perfectly rational behavior cannot be realized by agents with limited cognitive or computational resources.
1 code implementation • 3 Aug 2021 • Matthias Springstein, Stefanie Schneider, Javad Rahnama, Eyke Hüllermeier, Hubertus Kohle, Ralph Ewerth
In this paper, we introduce iART: an open Web platform for art-historical research that facilitates the process of comparative vision.
no code implementations • 21 Jul 2021 • Mohammad Hossein Shaker, Eyke Hüllermeier
The idea to distinguish and quantify two important types of uncertainty, often referred to as aleatoric and epistemic, has received increasing attention in machine learning research in the last couple of years.
1 code implementation • 20 Jul 2021 • Alexander Tornede, Lukas Gehring, Tanja Tornede, Marcel Wever, Eyke Hüllermeier
The problem of selecting an algorithm that appears most suitable for a specific instance of an algorithmic problem class, such as the Boolean satisfiability problem, is called instance-specific algorithm selection.
no code implementations • 22 Jun 2021 • Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz, Eyke Hüllermeier
Based on the derivatives computed during training, we dynamically group the labels into a predefined number of bins to impose an upper bound on the dimensionality of the linear system.
1 code implementation • NeurIPS 2021 • Julian Lienen, Eyke Hüllermeier
In our approach, we therefore allow the learner to label instances in the form of credal sets, that is, sets of (candidate) probability distributions.
no code implementations • 15 May 2021 • Marie-Luis Merten, Marcel Wever, Michaela Geierhos, Doris Tophinke, Eyke Hüllermeier
This paper elaborates on the notion of uncertainty in the context of annotation in large text corpora, specifically focusing on (but not limited to) historical languages.
1 code implementation • 18 Apr 2021 • Clemens Damke, Eyke Hüllermeier
Graph neural networks (GNNs) have been successfully applied in many structured data domains, with applications ranging from molecular property prediction to the analysis of social networks.
Ranked #1 on Graph Ranking on ZINC
1 code implementation • 8 Apr 2021 • Michael Dellnitz, Eyke Hüllermeier, Marvin Lücke, Sina Ober-Blöbaum, Christian Offen, Sebastian Peitz, Karlson Pfannschmidt
While the classical schemes apply very generally and are highly efficient on regular systems, they can behave sub-optimally when an inefficient step rejection mechanism is triggered by structurally complex systems such as chaotic systems.
no code implementations • 8 Dec 2020 • Johannes Fürnkranz, Eyke Hüllermeier, Eneldo Loza Mencía, Michael Rapp
Arguably the key reason for the success of deep neural networks is their ability to autonomously form non-linear combinations of the input features, which can be used in subsequent layers of the network.
1 code implementation • 17 Nov 2020 • Alexander Tornede, Marcel Wever, Eyke Hüllermeier
Instance-specific algorithm selection (AS) deals with the automatic selection, from a fixed set of candidates, of the algorithm most suitable for a specific instance of an algorithmic problem class, where "suitability" often refers to an algorithm's runtime.
no code implementations • 2 Nov 2020 • Viktor Bengs, Eyke Hüllermeier
We consider a resource-aware variant of the classical multi-armed bandit problem: In each round, the learner selects an arm and determines a resource limit.
no code implementations • 2 Nov 2020 • Eyke Hüllermeier, Marcel Wever, Eneldo Loza Mencía, Johannes Fürnkranz, Michael Rapp
For evaluating such predictions, the set of predicted labels needs to be compared to the ground-truth label set associated with that instance, and various loss functions have been proposed for this purpose.
1 code implementation • CVPR 2021 • Julian Lienen, Eyke Hüllermeier, Ralph Ewerth, Nils Nommensen
In many real-world applications, the relative depth of objects in an image is crucial for scene understanding.
no code implementations • 25 Aug 2020 • Arunselvan Ramaswamy, Eyke Hüllermeier
Deep Q-Learning is an important reinforcement learning algorithm, which involves training a deep neural network, called Deep Q-Network (DQN), to approximate the well-known Q-function.
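A minimal sketch of the temporal-difference loss at the heart of DQN training (assuming a replay batch of tensors and a separately maintained target network; PyTorch is used here purely for illustration, not as the paper's code):

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """Temporal-difference loss used in Deep Q-Learning.

    The online network q_net estimates Q(s, a); the bootstrapped target
    r + gamma * max_a' Q_target(s', a') uses a slowly updated target network
    and is detached from the gradient computation.
    """
    states, actions, rewards, next_states, dones = batch
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * (1.0 - dones) * next_q
    return F.mse_loss(q_sa, targets)
```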
1 code implementation • 4 Aug 2020 • Stefan Heid, Marcel Wever, Eyke Hüllermeier
Syntactic annotation of corpora in the form of part-of-speech (POS) tags is a key requirement for both linguistic research and subsequent automated natural language processing (NLP) tasks.
no code implementations • 16 Jul 2020 • Eyke Hüllermeier, Johannes Fürnkranz, Eneldo Loza Mencía
We advocate the use of conformal prediction (CP) to enhance rule-based multi-label classification (MLC).
no code implementations • 14 Jul 2020 • Karlson Pfannschmidt, Eyke Hüllermeier
We consider the problem of learning to choose from a given set of objects, where each object is represented by a feature vector.
no code implementations • 6 Jul 2020 • Sadegh Abbaszadeh, Eyke Hüllermeier
More specifically, we propose a method for binary classification, in which the Sugeno integral is used as an aggregation function that combines several local evaluations of an instance, pertaining to different features or measurements, into a single global evaluation.
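A minimal sketch of the discrete Sugeno integral used as such an aggregation function (the capacity in the toy example is our own choice, not one learned by the proposed method):

```python
def sugeno_integral(scores, capacity):
    """Discrete Sugeno integral of local evaluations w.r.t. a capacity.

    `scores` maps each criterion to a local evaluation in [0, 1]; `capacity`
    maps a frozenset of criteria to its importance in [0, 1] (monotone, with
    value 0 on the empty set and 1 on the full set).  Criteria are sorted by
    decreasing score and the integral is max_i min(score_(i), capacity(A_(i))),
    where A_(i) collects the i highest-scoring criteria.
    """
    items = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    best, prefix = 0.0, frozenset()
    for criterion, value in items:
        prefix = prefix | {criterion}
        best = max(best, min(value, capacity(prefix)))
    return best

# Toy example: capacity = normalized cardinality of the criteria subset
cap = lambda S: len(S) / 3
print(sugeno_integral({"a": 0.9, "b": 0.5, "c": 0.2}, cap))  # -> 0.5
```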
1 code implementation • 6 Jul 2020 • Alexander Tornede, Marcel Wever, Stefan Werner, Felix Mohr, Eyke Hüllermeier
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
1 code implementation • 1 Jul 2020 • Clemens Damke, Vitalik Melnikov, Eyke Hüllermeier
Current GNN architectures use a vertex neighborhood aggregation scheme, which limits their discriminative power to that of the 1-dimensional Weisfeiler-Lehman (WL) graph isomorphism test.
Ranked #6 on Graph Classification on REDDIT-B
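For reference, the 1-WL test mentioned above is plain colour refinement; a compact sketch (using hash-based colour compression, a minimal variant of our own):

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """1-dimensional Weisfeiler-Lehman colour refinement.

    `adj` maps each vertex to its list of neighbours.  Every round, a vertex's
    colour is replaced by a hash of its own colour together with the multiset
    of its neighbours' colours; two graphs with different colour histograms
    are certainly non-isomorphic (the converse does not hold).
    """
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        colors = {
            v: hash((colors[v], tuple(sorted(Counter(colors[u] for u in adj[v]).items()))))
            for v in adj
        }
    return Counter(colors.values())

# A triangle and a path on three vertices get different colour histograms
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
print(wl_colors(triangle) == wl_colors(path))  # False
```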
1 code implementation • 23 Jun 2020 • Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz, Vu-Linh Nguyen, Eyke Hüllermeier
In multi-label classification, where the evaluation of predictions is less straightforward than in single-label classification, various meaningful, though different, loss functions have been proposed.
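Two of the loss functions commonly contrasted in this line of work are the Hamming loss and the subset 0/1 loss: for label vectors $y, \hat{y} \in \{0,1\}^m$,

$$ \ell_{\mathrm{Ham}}(y, \hat{y}) \;=\; \frac{1}{m} \sum_{k=1}^{m} \mathbb{1}\big[y_k \neq \hat{y}_k\big], \qquad \ell_{0/1}(y, \hat{y}) \;=\; \mathbb{1}\big[y \neq \hat{y}\big] . $$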
no code implementations • 21 Jun 2020 • Vu-Linh Nguyen, Eyke Hüllermeier, Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz
While a variety of ensemble methods for multilabel classification have been proposed in the literature, the question of how to aggregate the predictions of the individual members of the ensemble has received little attention so far.
no code implementations • 23 May 2020 • Eyke Hüllermeier
Principles of analogical reasoning have recently been applied in the context of machine learning, for example to develop new methods for classification and preference learning.
1 code implementation • 11 May 2020 • Henrik Bode, Stefan Heid, Daniel Weber, Eyke Hüllermeier, Oliver Wallscheid
Micro- and smart grids (MSG) play an important role both for integrating renewable energy sources in conventional electricity grids and for providing power supply in remote areas.
no code implementations • 11 Feb 2020 • Adil El Mesaoudi-Paul, Viktor Bengs, Eyke Hüllermeier
We consider an extension of the contextual multi-armed bandit problem, in which, instead of selecting a single alternative (arm), a learner is supposed to make a preselection in the form of a subset of alternatives.
1 code implementation • 29 Jan 2020 • Alexander Tornede, Marcel Wever, Eyke Hüllermeier
Algorithm selection (AS) deals with selecting an algorithm from a fixed set of candidate algorithms most suitable for a specific instance of an algorithmic problem, e.g., choosing solvers for SAT problems.
1 code implementation • 3 Jan 2020 • Mohammad Hossein Shaker, Eyke Hüllermeier
In particular, the idea of distinguishing between two important types of uncertainty, often referred to as aleatoric and epistemic, has recently been studied in the setting of supervised learning.
1 code implementation • 10 Nov 2019 • Ammar Shaker, Eyke Hüllermeier
The problem of adaptive learning from evolving and possibly non-stationary data streams has attracted a lot of interest in machine learning in the recent past, and also stimulated research in related fields, such as computational intelligence and fuzzy systems.
1 code implementation • 21 Oct 2019 • Eyke Hüllermeier, Willem Waegeman
The notion of uncertainty is of major importance in machine learning and constitutes a key element of machine learning methodology.
no code implementations • 31 Aug 2019 • Vu-Linh Nguyen, Sébastien Destercke, Eyke Hüllermeier
In this paper, we advocate a distinction between two different types of uncertainty, referred to as epistemic and aleatoric, in the context of active learning.
no code implementations • ICML 2020 • Viktor Bengs, Eyke Hüllermeier
To formalize this goal, we introduce a reasonable notion of regret and derive lower bounds on the expected regret.
4 code implementations • 19 Jun 2019 • Thomas Mortier, Marek Wydmuch, Krzysztof Dembczyński, Eyke Hüllermeier, Willem Waegeman
In cases of uncertainty, a multi-class classifier preferably returns a set of candidate classes instead of predicting a single class label with little guarantee.
no code implementations • 7 Jun 2019 • Robin Senge, Juan José del Coz, Eyke Hüllermeier
Classifier chains have recently been proposed as an appealing method for tackling the multi-label classification task.
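A minimal sketch of the classifier-chain idea (a generic re-implementation for illustration, not the probabilistic analysis of the paper; class and parameter names are hypothetical):

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

class SimpleClassifierChain:
    """Minimal classifier chain for multi-label classification.

    One binary classifier per label is trained in a fixed order; classifier k
    receives the original features augmented with the (true) values of labels
    1..k-1 at training time and with their predictions at test time, so that
    label dependencies can be exploited.
    """

    def __init__(self, base=None):
        self.base = base or LogisticRegression(max_iter=1000)

    def fit(self, X, Y):
        X, Y = np.asarray(X, dtype=float), np.asarray(Y)
        self.models_ = []
        augmented = X
        for k in range(Y.shape[1]):
            model = clone(self.base).fit(augmented, Y[:, k])
            self.models_.append(model)
            augmented = np.hstack([augmented, Y[:, [k]]])
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        augmented, preds = X, []
        for model in self.models_:
            p = model.predict(augmented)
            preds.append(p)
            augmented = np.hstack([augmented, p.reshape(-1, 1)])
        return np.column_stack(preds)
```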
no code implementations • 19 Apr 2019 • Vu-Linh Nguyen, Eyke Hüllermeier
In contrast to conventional (single-label) classification, the setting of multilabel classification (MLC) allows an instance to belong to several classes simultaneously.
1 code implementation • 29 Jan 2019 • Karlson Pfannschmidt, Pritha Gupta, Björn Haddenhorst, Eyke Hüllermeier
Choice functions accept a set of alternatives as input and produce a preferred subset of these alternatives as output.
no code implementations • 7 Jan 2019 • Mohsen Ahmadi Fahandar, Eyke Hüllermeier
Building on a specific formalization of analogical relationships of the form "A relates to B as C relates to D", we establish a connection between two important subfields of artificial intelligence, namely analogical reasoning and kernel-based machine learning.
1 code implementation • 30 Nov 2018 • Eneldo Loza Mencía, Johannes Fürnkranz, Eyke Hüllermeier, Michael Rapp
Multi-label classification (MLC) is a supervised learning problem in which, contrary to standard multiclass classification, an instance can be associated with several class labels simultaneously.
no code implementations • 9 Nov 2018 • Marcel Wever, Felix Mohr, Eyke Hüllermeier
Automated machine learning (AutoML) has received increasing attention in the recent past.
no code implementations • 30 Jul 2018 • Viktor Bengs, Robert Busa-Fekete, Adil El Mesaoudi-Paul, Eyke Hüllermeier
The aim of this paper is to provide a survey of the state of the art in this field, referred to as preference-based multi-armed bandits or dueling bandits.
1 code implementation • Machine Learning 2018 • Felix Mohr, Marcel Wever, Eyke Hüllermeier
Automated machine learning (AutoML) seeks to automatically select, compose, and parametrize machine learning algorithms, so as to achieve optimal performance on a given task (dataset).
no code implementations • ICML 2018 • Adil El Mesaoudi-Paul, Eyke Hüllermeier, Robert Busa-Fekete
We also introduce a generalization of the model, in which the constraints on pairwise preferences are relaxed, and for which maximum likelihood estimation can be carried out based on a variation of the generalized iterative scaling algorithm.
no code implementations • 15 Jun 2018 • Sascha Henzgen, Eyke Hüllermeier
The problem of frequent pattern mining has been studied quite extensively for various types of data, including sets, sequences, and graphs.
1 code implementation • 15 Mar 2018 • Karlson Pfannschmidt, Pritha Gupta, Eyke Hüllermeier
Object ranking is an important problem in the realm of preference learning.
no code implementations • ICML 2017 • Mohsen Ahmadi Fahandar, Eyke Hüllermeier, Inés Couso
We consider the problem of statistical inference for ranking data, specifically rank aggregation, under the assumption that samples are incomplete in the sense of not comprising all choice alternatives.
no code implementations • 2 Dec 2017 • Eyke Hüllermeier
This paper briefly elaborates on a development in (applied) fuzzy logic that has taken place in the last couple of decades, namely, the complementation or even replacement of the traditional knowledge-based approach to fuzzy rule-based systems design by a data-driven one.
no code implementations • 28 Nov 2017 • Mohsen Ahmadi Fahandar, Eyke Hüllermeier
In this paper, we propose a new approach to object ranking based on principles of analogical reasoning.
2 code implementations • 2 Mar 2017 • Mike Czech, Eyke Hüllermeier, Marie-Christine Jakobs, Heike Wehrheim
Software verification competitions, such as the annual SV-COMP, evaluate software verification tools with respect to their effectiveness and efficiency.
no code implementations • NeurIPS 2015 • Róbert Busa-Fekete, Balázs Szörényi, Krzysztof Dembczynski, Eyke Hüllermeier
In this paper, we study the problem of F-measure maximization in the setting of online learning.
no code implementations • NeurIPS 2015 • Balázs Szörényi, Róbert Busa-Fekete, Adil Paul, Eyke Hüllermeier
We study the problem of online rank elicitation, assuming that rankings of a set of alternatives obey the Plackett-Luce distribution.
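The Plackett-Luce distribution assigns to a ranking $\pi$ of $n$ alternatives with skill parameters $v_1, \dots, v_n > 0$ the probability

$$ \mathbb{P}(\pi \mid v) \;=\; \prod_{i=1}^{n} \frac{v_{\pi(i)}}{\sum_{j=i}^{n} v_{\pi(j)}} , $$

i.e., alternatives are drawn without replacement with probability proportional to their skills.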
no code implementations • 17 May 2014 • Michiel Stock, Thomas Fober, Eyke Hüllermeier, Serghei Glinca, Gerhard Klebe, Tapio Pahikkala, Antti Airola, Bernard De Baets, Willem Waegeman
For a given query, the search operation results in a ranking of the enzymes in the database, from very similar to dissimilar enzymes, while information about the biological function of annotated database enzymes is ignored.
no code implementations • 3 May 2013 • Eyke Hüllermeier
Methods for analyzing or learning from "fuzzy data" have attracted increasing attention in recent years.
no code implementations • NeurIPS 2012 • Weiwei Cheng, Eyke Hüllermeier, Willem Waegeman, Volkmar Welker
Several machine learning methods allow for abstaining from uncertain predictions.
no code implementations • NeurIPS 2011 • Krzysztof J. Dembczynski, Willem Waegeman, Weiwei Cheng, Eyke Hüllermeier
The F-measure, originally introduced in information retrieval, is nowadays routinely used as a performance metric for problems such as binary classification, multi-label classification, and structured output prediction.
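In terms of true positives, false positives and false negatives, the (balanced) F-measure is

$$ F_1 \;=\; \frac{2\,\mathrm{TP}}{2\,\mathrm{TP} + \mathrm{FP} + \mathrm{FN}} \;=\; \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} . $$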