Search Results for author: Eyke Hüllermeier

Found 110 papers, 47 papers with code

shapiq: Shapley Interactions for Machine Learning

1 code implementation • 2 Oct 2024 • Maximilian Muschalik, Hubert Baniecki, Fabian Fumagalli, Patrick Kolpaczki, Barbara Hammer, Eyke Hüllermeier

In this work, we introduce shapiq, an open-source Python package that unifies state-of-the-art algorithms to efficiently compute SVs and any-order SIs in an application-agnostic framework.

Benchmarking Data Valuation +1
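The package unifies efficient algorithms for Shapley values (SVs) and Shapley interactions (SIs); its own API is not reproduced here. As a from-scratch sketch of the underlying quantity (function name and toy game are illustrative, not part of shapiq), the following computes exact SVs by brute force over coalitions, the O(2^n) computation that dedicated approximation algorithms are designed to avoid:

```python
from itertools import combinations
from math import factorial

def shapley_values(n, value):
    """Exact Shapley values for an n-player cooperative game.

    `value` maps a frozenset of player indices to the coalition's worth.
    Runs in O(2^n), which is why approximation packages exist.
    """
    players = range(n)
    phi = [0.0] * n
    for i in players:
        for size in range(n):
            for coalition in combinations([p for p in players if p != i], size):
                s = frozenset(coalition)
                # Shapley weight |S|! (n - |S| - 1)! / n! of this coalition.
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                phi[i] += weight * (value(s | {i}) - value(s))
    return phi

# Toy additive game: a coalition's worth is the sum of its members' weights,
# so each player's Shapley value recovers (approximately) its own weight.
weights = [3.0, 1.0, 2.0]
v = lambda s: sum(weights[i] for i in s)
phi = shapley_values(3, v)
```

For non-additive games the values additionally reflect interaction effects, which is where any-order SIs come in.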

CUQ-GNN: Committee-based Graph Uncertainty Quantification using Posterior Networks

1 code implementation • 6 Sep 2024 • Clemens Damke, Eyke Hüllermeier

In this work, we study the influence of domain-specific characteristics when defining a meaningful notion of predictive uncertainty on graph data.

Node Classification Uncertainty Quantification

Problem Solving Through Human-AI Preference-Based Cooperation

no code implementations • 14 Aug 2024 • Subhabrata Dutta, Timo Kaufmann, Goran Glavaš, Ivan Habernal, Kristian Kersting, Frauke Kreuter, Mira Mezini, Iryna Gurevych, Eyke Hüllermeier, Hinrich Schuetze

While there is a widespread belief that artificial general intelligence (AGI) -- or even superhuman AI -- is imminent, complex problems in expert domains are far from being solved.

Pairwise Difference Learning for Classification

1 code implementation • 28 Jun 2024 • Mohamed Karim Belaid, Maximilian Rabus, Eyke Hüllermeier

Pairwise difference learning (PDL) has recently been introduced as a new meta-learning technique for regression.

Binary Classification Classification +2

ALPBench: A Benchmark for Active Learning Pipelines on Tabular Data

1 code implementation • 25 Jun 2024 • Valentin Margraf, Marcel Wever, Sandra Gilhuber, Gabriel Marques Tavares, Thomas Seidl, Eyke Hüllermeier

This particularly holds for the combination of query strategies with different learning algorithms into active learning pipelines and examining the impact of the learning algorithm choice.

Active Learning tabular-classification

Linear Opinion Pooling for Uncertainty Quantification on Graphs

1 code implementation • 6 Jun 2024 • Clemens Damke, Eyke Hüllermeier

Challenging assumptions and postulates of state-of-the-art methods, we propose a novel approach that represents (epistemic) uncertainty in terms of mixtures of Dirichlet distributions and refers to the established principle of linear opinion pooling for propagating information between neighboring nodes in the graph.

Node Classification Uncertainty Quantification
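The paper pools mixtures of Dirichlet distributions; as a simplified sketch of the underlying principle only (the function and toy opinions below are illustrative, not the paper's method), a linear opinion pool combines categorical distributions as a convex combination:

```python
def linear_opinion_pool(distributions, weights):
    """Linear opinion pool of categorical distributions:
    p_pool = sum_i w_i * p_i, with non-negative weights summing to one."""
    assert abs(sum(weights) - 1.0) < 1e-9
    k = len(distributions[0])
    return [sum(w * p[c] for w, p in zip(weights, distributions)) for c in range(k)]

# Class-probability opinions of a node and its two graph neighbors,
# with the node's own opinion weighted most heavily.
pool = linear_opinion_pool(
    [[0.7, 0.2, 0.1], [0.5, 0.3, 0.2], [0.1, 0.8, 0.1]],
    [0.5, 0.25, 0.25],
)
```

Because the pool is a convex combination, it is again a proper probability distribution.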

Label-wise Aleatoric and Epistemic Uncertainty Quantification

1 code implementation • 4 Jun 2024 • Yusuf Sale, Paul Hofman, Timo Löhr, Lisa Wimmer, Thomas Nagler, Eyke Hüllermeier

We present a novel approach to uncertainty quantification in classification tasks based on label-wise decomposition of uncertainty measures.

Decision Making Uncertainty Quantification

Inverse Constitutional AI: Compressing Preferences into Principles

1 code implementation • 2 Jun 2024 • Arduin Findeis, Timo Kaufmann, Eyke Hüllermeier, Samuel Albanie, Robert Mullins

In constitutional AI, a set of principles (or constitution) is used to provide feedback and fine-tune AI models.

Chatbot Language Modelling +1

KernelSHAP-IQ: Weighted Least-Square Optimization for Shapley Interactions

no code implementations • 17 May 2024 • Fabian Fumagalli, Maximilian Muschalik, Patrick Kolpaczki, Eyke Hüllermeier, Barbara Hammer

As a result, we propose KernelSHAP-IQ, a direct extension of KernelSHAP for SII, and demonstrate state-of-the-art performance for feature interactions.

Position: Why We Must Rethink Empirical Research in Machine Learning

no code implementations • 3 May 2024 • Moritz Herrmann, F. Julian D. Lange, Katharina Eggensperger, Giuseppe Casalicchio, Marcel Wever, Matthias Feurer, David Rügamer, Eyke Hüllermeier, Anne-Laure Boulesteix, Bernd Bischl

We warn against a common but incomplete understanding of empirical research in machine learning that leads to non-replicable results, makes findings unreliable, and threatens to undermine progress in the field.

Position

Quantifying Aleatoric and Epistemic Uncertainty with Proper Scoring Rules

no code implementations • 18 Apr 2024 • Paul Hofman, Yusuf Sale, Eyke Hüllermeier

Uncertainty representation and quantification are paramount in machine learning and constitute an important prerequisite for safety-critical applications.

Explaining Bayesian Optimization by Shapley Values Facilitates Human-AI Collaboration

no code implementations • 7 Mar 2024 • Julian Rodemann, Federico Croppi, Philipp Arens, Yusuf Sale, Julia Herbinger, Bernd Bischl, Eyke Hüllermeier, Thomas Augustin, Conor J. Walsh, Giuseppe Casalicchio

We address this issue by proposing ShapleyBO, a framework for interpreting BO's proposals by game-theoretic Shapley values. They quantify each parameter's contribution to BO's acquisition function.

Bayesian Optimization Gaussian Processes

Conformalized Credal Set Predictors

1 code implementation • 16 Feb 2024 • Alireza Javanmardi, David Stutz, Eyke Hüllermeier

Credal sets are sets of probability distributions that are considered as candidates for an imprecisely known ground-truth distribution.

Conformal Prediction Natural Language Inference +1
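The paper conformalizes credal set predictors; as background only, standard split conformal prediction for classification (which the paper builds on) can be sketched as follows. All names and the toy calibration data are illustrative:

```python
import math

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction sets with score s = 1 - p(true class).
    Under exchangeability, the sets cover the true label with
    probability at least 1 - alpha (marginally)."""
    scores = sorted(1.0 - p[y] for p, y in zip(cal_probs, cal_labels))
    n = len(scores)
    # Finite-sample-corrected quantile index: ceil((n+1)(1-alpha))-th score.
    q_idx = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    qhat = scores[q_idx]
    # Include every class whose score would not exceed the threshold.
    return [{y for y, p_y in enumerate(p) if 1.0 - p_y <= qhat} for p in test_probs]

# Toy calibration set (class 0 is always correct) and one test point.
cal_probs = [[0.9, 0.05, 0.05], [0.7, 0.2, 0.1], [0.8, 0.1, 0.1]]
cal_labels = [0, 0, 0]
sets = conformal_sets(cal_probs, cal_labels, [[0.8, 0.15, 0.05]])
```

The set-valued output is what makes the connection to credal sets natural: both replace a single distribution or label with a set of candidates.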

Is Epistemic Uncertainty Faithfully Represented by Evidential Deep Learning Methods?

1 code implementation • 14 Feb 2024 • Mira Jürgens, Nis Meinert, Viktor Bengs, Eyke Hüllermeier, Willem Waegeman

Trustworthy ML systems should not only return accurate predictions, but also a reliable representation of their uncertainty.

Deep Learning

Information Leakage Detection through Approximate Bayes-optimal Prediction

1 code implementation • 25 Jan 2024 • Pritha Gupta, Marcel Wever, Eyke Hüllermeier

Though effective, emerging supervised machine learning based approaches to detect ILs are limited to binary system sensitive information and lack a comprehensive framework.

AutoML Learning Theory

Beyond TreeSHAP: Efficient Computation of Any-Order Shapley Interactions for Tree Ensembles

1 code implementation • 22 Jan 2024 • Maximilian Muschalik, Fabian Fumagalli, Barbara Hammer, Eyke Hüllermeier

While shallow decision trees may be interpretable, larger ensemble models like gradient-boosted trees, which often set the state of the art in machine learning problems involving tabular data, still remain black box models.

Explainable Artificial Intelligence (XAI)

Second-Order Uncertainty Quantification: Variance-Based Measures

no code implementations • 30 Dec 2023 • Yusuf Sale, Paul Hofman, Lisa Wimmer, Eyke Hüllermeier, Thomas Nagler

Uncertainty quantification is a critical aspect of machine learning models, providing important insights into the reliability of predictions and aiding the decision-making process in real-world applications.

Decision Making Uncertainty Quantification

A Survey of Reinforcement Learning from Human Feedback

no code implementations • 22 Dec 2023 • Timo Kaufmann, Paul Weng, Viktor Bengs, Eyke Hüllermeier

Reinforcement learning from human feedback (RLHF) is a variant of reinforcement learning (RL) that learns from human feedback instead of relying on an engineered reward function.

reinforcement-learning Reinforcement Learning +2

Second-Order Uncertainty Quantification: A Distance-Based Approach

no code implementations • 2 Dec 2023 • Yusuf Sale, Viktor Bengs, Michele Caprio, Eyke Hüllermeier

In the past couple of years, various approaches to representing and quantifying different types of predictive uncertainty in machine learning, notably in the setting of classification, have been proposed on the basis of second-order probability distributions, i.e., predictions in the form of distributions on probability distributions.

Uncertainty Quantification

Identifying Copeland Winners in Dueling Bandits with Indifferences

no code implementations • 1 Oct 2023 • Viktor Bengs, Björn Haddenhorst, Eyke Hüllermeier

We consider the task of identifying the Copeland winner(s) in a dueling bandits problem with ternary feedback.
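The Copeland winner is the arm that beats the largest number of other arms in pairwise comparisons. A minimal sketch of the identification target (not the paper's bandit algorithm; the half-point tie convention and the toy preference matrix are illustrative assumptions):

```python
def copeland_winners(pref):
    """Copeland winner(s) of a pairwise preference matrix.

    pref[i][j] > 0.5 means arm i beats arm j in expectation; an exact
    0.5 (indifference) contributes half a point, a common convention.
    """
    n = len(pref)
    scores = []
    for i in range(n):
        s = 0.0
        for j in range(n):
            if i == j:
                continue
            if pref[i][j] > 0.5:
                s += 1.0
            elif pref[i][j] == 0.5:
                s += 0.5
        scores.append(s)
    best = max(scores)
    return [i for i, s in enumerate(scores) if s == best]

# Arm 0 beats both others; arms 1 and 2 are indifferent to each other.
pref = [
    [0.5, 0.7, 0.6],
    [0.3, 0.5, 0.5],
    [0.4, 0.5, 0.5],
]
winners = copeland_winners(pref)
```

In the bandit setting, of course, `pref` is unknown and must be estimated from noisy ternary feedback.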

Probabilistic Self-supervised Learning via Scoring Rules Minimization

no code implementations • 5 Sep 2023 • Amirhossein Vahidi, Simon Schoßer, Lisa Wimmer, Yawei Li, Bernd Bischl, Eyke Hüllermeier, Mina Rezaei

In this paper, we propose a novel probabilistic self-supervised learning via Scoring Rule Minimization (ProSMIN), which leverages the power of probabilistic models to enhance representation quality and mitigate collapsing representations.

Knowledge Distillation Out-of-Distribution Detection +3

Diversified Ensemble of Independent Sub-Networks for Robust Self-Supervised Representation Learning

no code implementations • 28 Aug 2023 • Amirhossein Vahidi, Lisa Wimmer, Hüseyin Anil Gündüz, Bernd Bischl, Eyke Hüllermeier, Mina Rezaei

Ensembling a neural network is a widely recognized approach to enhance model performance, estimate uncertainty, and improve robustness in deep supervised learning.

Diversity Ensemble Learning +2

Weighting by Tying: A New Approach to Weighted Rank Correlation

no code implementations • 21 Aug 2023 • Sascha Henzgen, Eyke Hüllermeier

Measures of rank correlation are commonly used in statistics to capture the degree of concordance between two orderings of the same set of items.
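The unweighted baseline that weighted variants build on is Kendall's tau: the normalized difference between concordant and discordant item pairs. A small illustrative implementation (tau-a, i.e., without a tie correction):

```python
from itertools import combinations

def kendall_tau(a, b):
    """Kendall's tau-a between two orderings given as score lists:
    (#concordant - #discordant) pairs, divided by the total pair count."""
    pairs = list(combinations(range(len(a)), 2))
    conc = disc = 0
    for i, j in pairs:
        s = (a[i] - a[j]) * (b[i] - b[j])
        if s > 0:       # pair ordered the same way in both rankings
            conc += 1
        elif s < 0:     # pair ordered oppositely
            disc += 1
    return (conc - disc) / len(pairs)
```

Identical orderings give +1, fully reversed orderings give -1; a weighted variant would additionally weight each pair by its importance.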

A Novel Bayes' Theorem for Upper Probabilities

no code implementations • 13 Jul 2023 • Michele Caprio, Yusuf Sale, Eyke Hüllermeier, Insup Lee

In their seminal 1990 paper, Wasserman and Kadane establish an upper bound for the Bayes' posterior probability of a measurable set $A$, when the prior lies in a class of probability measures $\mathcal{P}$ and the likelihood is precise.

Model Predictive Control

Is the Volume of a Credal Set a Good Measure for Epistemic Uncertainty?

no code implementations • 16 Jun 2023 • Yusuf Sale, Michele Caprio, Eyke Hüllermeier

Adequate uncertainty representation and quantification have become imperative in various scientific disciplines, especially in machine learning and artificial intelligence.

Binary Classification Multi-class Classification

iPDP: On Partial Dependence Plots in Dynamic Modeling Scenarios

1 code implementation • 13 Jun 2023 • Maximilian Muschalik, Fabian Fumagalli, Rohit Jagtani, Barbara Hammer, Eyke Hüllermeier

Post-hoc explanation techniques such as the well-established partial dependence plot (PDP), which investigates feature dependencies, are used in explainable artificial intelligence (XAI) to understand black-box machine learning models.

Explainable Artificial Intelligence (XAI)
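The classical batch PDP that iPDP extends to the dynamic setting can be sketched in a few lines: for each grid value of the feature of interest, substitute it into every data point and average the model's predictions. The toy model and data below are illustrative, not from the paper:

```python
def partial_dependence(model, X, feature, grid):
    """Batch partial dependence of `model` on one feature:
    for each grid value v, fix that feature to v in every row of X
    and average the resulting predictions."""
    pdp = []
    for v in grid:
        total = 0.0
        for row in X:
            row = list(row)       # copy, so X itself is not mutated
            row[feature] = v
            total += model(row)
        pdp.append(total / len(X))
    return pdp

# Toy model: depends linearly on feature 0 and ignores feature 1,
# so the PDP curve for feature 0 is the line 2 * v.
model = lambda x: 2.0 * x[0] + 0.0 * x[1]
X = [[0.1, 5.0], [0.4, -3.0], [0.9, 1.0]]
curve = partial_dependence(model, X, feature=0, grid=[0.0, 1.0, 2.0])
```

The incremental iPDP setting replaces the fixed dataset X with a data stream and updates the curve online, which the batch sketch above cannot do.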

Conformal Prediction with Partially Labeled Data

1 code implementation • 1 Jun 2023 • Alireza Javanmardi, Yusuf Sale, Paul Hofman, Eyke Hüllermeier

While the predictions produced by conformal prediction are set-valued, the data used for training and calibration is supposed to be precise.

Conformal Prediction Weakly-supervised Learning

Koopman Kernel Regression

1 code implementation • NeurIPS 2023 • Petar Bevanda, Max Beier, Armin Lederer, Stefan Sosnowski, Eyke Hüllermeier, Sandra Hirche

Many machine learning approaches for decision making, such as reinforcement learning, rely on simulators or predictive models to forecast the time-evolution of quantities of interest, e.g., the state of an agent or the reward of a policy.

Decision Making regression

Mitigating Label Noise through Data Ambiguation

1 code implementation • 23 May 2023 • Julian Lienen, Eyke Hüllermeier

Label noise poses an important challenge in machine learning, especially in deep learning, in which large models with high expressive power dominate the field.

Memorization

Optimizing Data Shapley Interaction Calculation from O(2^n) to O(t n^2) for KNN models

no code implementations • 2 Apr 2023 • Mohamed Karim Belaid, Dorra El Mekki, Maximilian Rabus, Eyke Hüllermeier

With the rapid growth of data availability and usage, quantifying the added value of each training data point has become a crucial process in the field of artificial intelligence.

Data Interaction

iSAGE: An Incremental Version of SAGE for Online Explanation on Data Streams

no code implementations • 2 Mar 2023 • Maximilian Muschalik, Fabian Fumagalli, Barbara Hammer, Eyke Hüllermeier

Existing methods for explainable artificial intelligence (XAI), including popular feature importance measures such as SAGE, are mostly restricted to the batch learning scenario.

Explainable Artificial Intelligence (XAI) +2

Iterative Deepening Hyperband

1 code implementation • 1 Feb 2023 • Jasmin Brandt, Marcel Wever, Dimitrios Iliadis, Viktor Bengs, Eyke Hüllermeier

Hyperparameter optimization (HPO) is concerned with the automated search for the most appropriate hyperparameter configuration (HPC) of a parameterized machine learning algorithm.

Hyperparameter Optimization
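Hyperband's building block is successive halving: evaluate all candidate configurations on a small budget, keep the best fraction, and repeat with a larger budget. A minimal sketch of one bracket (the toy objective, which improves with budget, is an illustrative assumption and not the paper's iterative-deepening variant):

```python
def successive_halving(configs, evaluate, min_budget=1, eta=2):
    """One bracket of successive halving: evaluate all configurations
    on the current budget (lower loss is better), keep the best 1/eta
    fraction, multiply the budget by eta, and repeat until one remains."""
    budget = min_budget
    survivors = list(configs)
    while len(survivors) > 1:
        scored = sorted(survivors, key=lambda c: evaluate(c, budget))
        survivors = scored[: max(1, len(scored) // eta)]
        budget *= eta
    return survivors[0]

# Toy objective: the validation loss of "configuration" c shrinks toward
# |c - 0.3| as the budget b grows, so the config nearest 0.3 should win.
evaluate = lambda c, b: abs(c - 0.3) + 1.0 / b
best = successive_halving([0.0, 0.25, 0.5, 0.75, 1.0], evaluate)
```

Full Hyperband runs several such brackets with different budget/width trade-offs; the paper's iterative-deepening variant changes how budgets are scheduled across brackets.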

Approximating the Shapley Value without Marginal Contributions

no code implementations • 1 Feb 2023 • Patrick Kolpaczki, Viktor Bengs, Maximilian Muschalik, Eyke Hüllermeier

The Shapley value, which is arguably the most popular approach for assigning a meaningful contribution value to players in a cooperative game, has recently been used intensively in explainable artificial intelligence.

Explainable artificial intelligence
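For contrast, the standard baseline the paper departs from is Monte Carlo approximation via permutation sampling, which is built entirely on marginal contributions. A hedged sketch of that baseline (not the paper's marginal-contribution-free estimator; names and the toy game are illustrative):

```python
import random

def permutation_shapley(n, value, num_samples=200, seed=0):
    """Classic Monte Carlo Shapley approximation: draw random player
    orders and credit each player with its marginal contribution --
    exactly the quantity the paper above proposes to do without."""
    rng = random.Random(seed)
    phi = [0.0] * n
    for _ in range(num_samples):
        order = list(range(n))
        rng.shuffle(order)
        coalition = frozenset()
        prev = value(coalition)
        for player in order:
            coalition = coalition | {player}
            cur = value(coalition)
            phi[player] += cur - prev   # marginal contribution of `player`
            prev = cur
    return [p / num_samples for p in phi]

# Additive toy game: estimates coincide with each player's own weight.
weights = [3.0, 1.0, 2.0]
est = permutation_shapley(3, lambda s: sum(weights[i] for i in s))
```

Each permutation costs n game evaluations, and for general games the variance of the estimate decays only with the number of sampled permutations.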

On Second-Order Scoring Rules for Epistemic Uncertainty Quantification

no code implementations • 30 Jan 2023 • Viktor Bengs, Eyke Hüllermeier, Willem Waegeman

In this paper, we generalise these findings and prove a more fundamental result: There seems to be no loss function that provides an incentive for a second-order learner to faithfully represent its epistemic uncertainty in the same manner as proper scoring rules do for standard (first-order) learners.

Uncertainty Quantification

Conformal Prediction Intervals for Remaining Useful Lifetime Estimation

1 code implementation • 30 Dec 2022 • Alireza Javanmardi, Eyke Hüllermeier

The main objective of Prognostics and Health Management is to estimate the Remaining Useful Lifetime (RUL), namely, the time that a system or a piece of equipment is still in working order before starting to function incorrectly.

Conformal Prediction Management +3
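The basic recipe behind such intervals is split conformal regression: the interval half-width is a corrected quantile of the absolute residuals on a held-out calibration set. A minimal sketch under that standard recipe (the residuals and RUL prediction below are illustrative, not from the paper's benchmark):

```python
import math

def conformal_interval(cal_residuals, prediction, alpha=0.1):
    """Split conformal regression interval: the half-width is the
    finite-sample-corrected (1 - alpha) quantile of the absolute
    calibration residuals |y - y_hat|."""
    scores = sorted(abs(r) for r in cal_residuals)
    n = len(scores)
    q_idx = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    qhat = scores[q_idx]
    return prediction - qhat, prediction + qhat

# Nine calibration residuals and a point prediction of 100 time units.
residuals = [0.5, -1.0, 0.2, 0.8, -0.3, 1.2, 0.1, 0.4, -0.6]
lo, hi = conformal_interval(residuals, prediction=100.0)
```

Under exchangeability the interval covers the true RUL with probability at least 1 - alpha, regardless of the underlying regression model.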

AC-Band: A Combinatorial Bandit-Based Approach to Algorithm Configuration

1 code implementation • 1 Dec 2022 • Jasmin Brandt, Elias Schede, Viktor Bengs, Björn Haddenhorst, Eyke Hüllermeier, Kevin Tierney

We study the algorithm configuration (AC) problem, in which one seeks to find an optimal parameter configuration of a given target algorithm in an automated way.

Multi-Armed Bandits

Quantifying Aleatoric and Epistemic Uncertainty in Machine Learning: Are Conditional Entropy and Mutual Information Appropriate Measures?

1 code implementation • 7 Sep 2022 • Lisa Wimmer, Yusuf Sale, Paul Hofman, Bernd Bischl, Eyke Hüllermeier

The quantification of aleatoric and epistemic uncertainty in terms of conditional entropy and mutual information, respectively, has recently become quite common in machine learning.

Uncertainty Quantification
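The decomposition the paper scrutinizes is easy to state for an ensemble of probabilistic predictors: total uncertainty is the entropy of the averaged prediction, the aleatoric part is the average entropy of the members (conditional entropy), and the epistemic part is their difference (mutual information). A minimal sketch with toy two-member ensembles:

```python
import math

def uncertainty_decomposition(member_probs):
    """Entropy-based decomposition for an ensemble of categorical
    predictions: total = H(mean prediction), aleatoric = mean member
    entropy (conditional entropy), epistemic = their difference
    (mutual information)."""
    H = lambda p: -sum(q * math.log(q) for q in p if q > 0.0)
    k = len(member_probs[0])
    mean = [sum(p[c] for p in member_probs) / len(member_probs) for c in range(k)]
    total = H(mean)
    aleatoric = sum(H(p) for p in member_probs) / len(member_probs)
    return total, aleatoric, total - aleatoric

# Members agree -> the epistemic (mutual information) part vanishes.
t, a, e = uncertainty_decomposition([[0.7, 0.3], [0.7, 0.3]])
# Members disagree maximally -> the epistemic part equals ln 2.
t2, a2, e2 = uncertainty_decomposition([[1.0, 0.0], [0.0, 1.0]])
```

Note the second case: each member is individually certain (zero aleatoric uncertainty), yet total uncertainty is maximal; whether such measures behave appropriately in general is precisely the paper's question.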

Memorization-Dilation: Modeling Neural Collapse Under Label Noise

1 code implementation • 11 Jun 2022 • Duc Anh Nguyen, Ron Levie, Julian Lienen, Gitta Kutyniok, Eyke Hüllermeier

The notion of neural collapse refers to several emergent phenomena that have been empirically observed across various canonical classification problems.

Memorization

Conformal Credal Self-Supervised Learning

1 code implementation • 30 May 2022 • Julian Lienen, Caglar Demir, Eyke Hüllermeier

One such method, so-called credal self-supervised learning, maintains pseudo-supervision in the form of sets of (instead of single) probability distributions over labels, thereby allowing for a flexible yet uncertainty-aware labeling.

Conformal Prediction Self-Supervised Learning

On the Calibration of Probabilistic Classifier Sets

no code implementations • 20 May 2022 • Thomas Mortier, Viktor Bengs, Eyke Hüllermeier, Stijn Luca, Willem Waegeman

In this paper, we extend the notion of calibration, which is commonly used to evaluate the validity of the aleatoric uncertainty representation of a single probabilistic classifier, to assess the validity of an epistemic uncertainty representation obtained by sets of probabilistic classifiers.

Ensemble Learning Multi-class Classification

Pitfalls of Epistemic Uncertainty Quantification through Loss Minimisation

no code implementations • 11 Mar 2022 • Viktor Bengs, Eyke Hüllermeier, Willem Waegeman

Uncertainty quantification has received increasing attention in machine learning in the recent past.

Uncertainty Quantification

Stochastic Contextual Dueling Bandits under Linear Stochastic Transitivity Models

no code implementations • 9 Feb 2022 • Viktor Bengs, Aadirupa Saha, Eyke Hüllermeier

In every round of the sequential decision problem, the learner makes a context-dependent selection of two choice alternatives (arms) to be compared with each other and receives feedback in the form of noisy preference information.

Finding Optimal Arms in Non-stochastic Combinatorial Bandits with Semi-bandit Feedback and Finite Budget

no code implementations • 9 Feb 2022 • Jasmin Brandt, Viktor Bengs, Björn Haddenhorst, Eyke Hüllermeier

We consider the combinatorial bandits problem with semi-bandit feedback under finite sampling budget constraints, in which the learner can carry out its action only for a limited number of times specified by an overall budget.

A Survey of Methods for Automated Algorithm Configuration

no code implementations • 3 Feb 2022 • Elias Schede, Jasmin Brandt, Alexander Tornede, Marcel Wever, Viktor Bengs, Eyke Hüllermeier, Kevin Tierney

We review existing AC literature within the lens of our taxonomies, outline relevant design choices of configuration approaches, contrast methods and problem variants against each other, and describe the state of AC in industry.

Survey

Non-Stationary Dueling Bandits

no code implementations • 2 Feb 2022 • Patrick Kolpaczki, Viktor Bengs, Eyke Hüllermeier

We propose the $\mathrm{Beat\, the\, Winner\, Reset}$ algorithm and prove a bound on its expected binary weak regret in the stationary case, which tightens the bound of current state-of-art algorithms.

Prescriptive Machine Learning for Automated Decision Making: Challenges and Opportunities

no code implementations • 15 Dec 2021 • Eyke Hüllermeier

Recent applications of machine learning (ML) reveal a noticeable shift from its use for predictive modeling in the sense of a data-driven construction of models mainly used for the purpose of prediction (of ground-truth facts) to its use for prescriptive modeling.

BIG-bench Machine Learning Decision Making +1

Identification of the Generalized Condorcet Winner in Multi-dueling Bandits

1 code implementation • NeurIPS 2021 • Björn Haddenhorst, Viktor Bengs, Eyke Hüllermeier

The reliable identification of the “best” arm while keeping the sample complexity as low as possible is a common task in the field of multi-armed bandits.

Multi-Armed Bandits

Machine Learning for Online Algorithm Selection under Censored Feedback

1 code implementation • 13 Sep 2021 • Alexander Tornede, Viktor Bengs, Eyke Hüllermeier

In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.

BIG-bench Machine Learning Thompson Sampling

Automated Machine Learning, Bounded Rationality, and Rational Metareasoning

no code implementations • 10 Sep 2021 • Eyke Hüllermeier, Felix Mohr, Alexander Tornede, Marcel Wever

The notion of bounded rationality originated from the insight that perfectly rational behavior cannot be realized by agents with limited cognitive or computational resources.

AutoML BIG-bench Machine Learning

Ensemble-based Uncertainty Quantification: Bayesian versus Credal Inference

no code implementations • 21 Jul 2021 • Mohammad Hossein Shaker, Eyke Hüllermeier

The idea to distinguish and quantify two important types of uncertainty, often referred to as aleatoric and epistemic, has received increasing attention in machine learning research in the last couple of years.

Ensemble Learning Uncertainty Quantification

Algorithm Selection on a Meta Level

1 code implementation • 20 Jul 2021 • Alexander Tornede, Lukas Gehring, Tanja Tornede, Marcel Wever, Eyke Hüllermeier

The problem of selecting an algorithm that appears most suitable for a specific instance of an algorithmic problem class, such as the Boolean satisfiability problem, is called instance-specific algorithm selection.

Ensemble Learning Meta-Learning

Gradient-based Label Binning in Multi-label Classification

no code implementations • 22 Jun 2021 • Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz, Eyke Hüllermeier

Based on the derivatives computed during training, we dynamically group the labels into a predefined number of bins to impose an upper bound on the dimensionality of the linear system.

Classification Multi-Label Classification

Credal Self-Supervised Learning

1 code implementation • NeurIPS 2021 • Julian Lienen, Eyke Hüllermeier

In our approach, we therefore allow the learner to label instances in the form of credal sets, that is, sets of (candidate) probability distributions.

Self-Supervised Learning

Annotation Uncertainty in the Context of Grammatical Change

no code implementations • 15 May 2021 • Marie-Luis Merten, Marcel Wever, Michaela Geierhos, Doris Tophinke, Eyke Hüllermeier

This paper elaborates on the notion of uncertainty in the context of annotation in large text corpora, specifically focusing on (but not limited to) historical languages.

Ranking Structured Objects with Graph Neural Networks

1 code implementation • 18 Apr 2021 • Clemens Damke, Eyke Hüllermeier

Graph neural networks (GNNs) have been successfully applied in many structured data domains, with applications ranging from molecular property prediction to the analysis of social networks.

Graph Ranking Graph Regression +3

Efficient time stepping for numerical integration using reinforcement learning

1 code implementation • 8 Apr 2021 • Michael Dellnitz, Eyke Hüllermeier, Marvin Lücke, Sina Ober-Blöbaum, Christian Offen, Sebastian Peitz, Karlson Pfannschmidt

While the classical schemes apply very generally and are highly efficient on regular systems, they can behave sub-optimally when an inefficient step rejection mechanism is triggered by structurally complex systems such as chaotic systems.

Meta-Learning Numerical Integration +3

Learning Structured Declarative Rule Sets -- A Challenge for Deep Discrete Learning

no code implementations • 8 Dec 2020 • Johannes Fürnkranz, Eyke Hüllermeier, Eneldo Loza Mencía, Michael Rapp

Arguably the key reason for the success of deep neural networks is their ability to autonomously form non-linear combinations of the input features, which can be used in subsequent layers of the network.

Position

Towards Meta-Algorithm Selection

1 code implementation • 17 Nov 2020 • Alexander Tornede, Marcel Wever, Eyke Hüllermeier

Instance-specific algorithm selection (AS) deals with the automatic selection of an algorithm from a fixed set of candidates most suitable for a specific instance of an algorithmic problem class, where "suitability" often refers to an algorithm's runtime.

Multi-Armed Bandits with Censored Consumption of Resources

no code implementations • 2 Nov 2020 • Viktor Bengs, Eyke Hüllermeier

We consider a resource-aware variant of the classical multi-armed bandit problem: In each round, the learner selects an arm and determines a resource limit.

Multi-Armed Bandits

A Flexible Class of Dependence-aware Multi-Label Loss Functions

no code implementations • 2 Nov 2020 • Eyke Hüllermeier, Marcel Wever, Eneldo Loza Mencia, Johannes Fürnkranz, Michael Rapp

For evaluating such predictions, the set of predicted labels needs to be compared to the ground-truth label set associated with that instance, and various loss functions have been proposed for this purpose.

Multi-Label Classification

Deep Q-Learning: Theoretical Insights from an Asymptotic Analysis

no code implementations • 25 Aug 2020 • Arunselvan Ramaswamy, Eyke Hüllermeier

Deep Q-Learning is an important reinforcement learning algorithm, which involves training a deep neural network, called Deep Q-Network (DQN), to approximate the well-known Q-function.

Decision Making Q-Learning

Reliable Part-of-Speech Tagging of Historical Corpora through Set-Valued Prediction

1 code implementation • 4 Aug 2020 • Stefan Heid, Marcel Wever, Eyke Hüllermeier

Syntactic annotation of corpora in the form of part-of-speech (POS) tags is a key requirement for both linguistic research and subsequent automated natural language processing (NLP) tasks.

Part-Of-Speech Tagging POS +2

Conformal Rule-Based Multi-label Classification

no code implementations • 16 Jul 2020 • Eyke Hüllermeier, Johannes Fürnkranz, Eneldo Loza Mencia

We advocate the use of conformal prediction (CP) to enhance rule-based multi-label classification (MLC).

Classification Conformal Prediction +3

Learning Choice Functions via Pareto-Embeddings

no code implementations • 14 Jul 2020 • Karlson Pfannschmidt, Eyke Hüllermeier

We consider the problem of learning to choose from a given set of objects, where each object is represented by a feature vector.

Machine Learning with the Sugeno Integral: The Case of Binary Classification

no code implementations • 6 Jul 2020 • Sadegh Abbaszadeh, Eyke Hüllermeier

More specifically, we propose a method for binary classification, in which the Sugeno integral is used as an aggregation function that combines several local evaluations of an instance, pertaining to different features or measurements, into a single global evaluation.

BIG-bench Machine Learning Binary Classification +1

Run2Survive: A Decision-theoretic Approach to Algorithm Selection based on Survival Analysis

1 code implementation • 6 Jul 2020 • Alexander Tornede, Marcel Wever, Stefan Werner, Felix Mohr, Eyke Hüllermeier

In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.

Survival Analysis

A Novel Higher-order Weisfeiler-Lehman Graph Convolution

1 code implementation • 1 Jul 2020 • Clemens Damke, Vitalik Melnikov, Eyke Hüllermeier

Current GNN architectures use a vertex neighborhood aggregation scheme, which limits their discriminative power to that of the 1-dimensional Weisfeiler-Lehman (WL) graph isomorphism test.

Graph Classification
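The 1-dimensional WL test that bounds the discriminative power of standard GNNs is short enough to sketch directly. The following color-refinement routine is an illustrative implementation of the classical test, not the paper's higher-order convolution; the two toy graphs are assumptions for the demo:

```python
def wl_colors(adj, rounds=3):
    """1-dimensional Weisfeiler-Lehman colour refinement: repeatedly
    relabel each vertex by its own colour plus the sorted multiset of
    its neighbours' colours. Different colour histograms certify
    non-isomorphism (the converse does not hold)."""
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in adj
        }
        # Compress each distinct signature to a fresh small integer colour.
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: relabel[signatures[v]] for v in adj}
    return sorted(colors.values())

# A triangle and a path on three vertices: 1-WL tells them apart
# because their degree patterns already differ after one round.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
```

The well-known failure cases of 1-WL (e.g., pairs of non-isomorphic regular graphs it cannot distinguish) are what motivate higher-order variants like the one in the paper.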

Learning Gradient Boosted Multi-label Classification Rules

1 code implementation • 23 Jun 2020 • Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz, Vu-Linh Nguyen, Eyke Hüllermeier

In multi-label classification, where the evaluation of predictions is less straightforward than in single-label classification, various meaningful, though different, loss functions have been proposed.

Classification General Classification +1

On Aggregation in Ensembles of Multilabel Classifiers

no code implementations • 21 Jun 2020 • Vu-Linh Nguyen, Eyke Hüllermeier, Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz

While a variety of ensemble methods for multilabel classification have been proposed in the literature, the question of how to aggregate the predictions of the individual members of the ensemble has received little attention so far.

General Classification

Towards Analogy-Based Explanations in Machine Learning

no code implementations • 23 May 2020 • Eyke Hüllermeier

Principles of analogical reasoning have recently been applied in the context of machine learning, for example to develop new methods for classification and preference learning.

BIG-bench Machine Learning Interpretable Machine Learning

Towards a Scalable and Flexible Simulation and Testing Environment Toolbox for Intelligent Microgrid Control

1 code implementation • 11 May 2020 • Henrik Bode, Stefan Heid, Daniel Weber, Eyke Hüllermeier, Oliver Wallscheid

Micro- and smart grids (MSG) play an important role both for integrating renewable energy sources in conventional electricity grids and for providing power supply in remote areas.

Systems and Control

Online Preselection with Context Information under the Plackett-Luce Model

no code implementations • 11 Feb 2020 • Adil El Mesaoudi-Paul, Viktor Bengs, Eyke Hüllermeier

We consider an extension of the contextual multi-armed bandit problem, in which, instead of selecting a single alternative (arm), a learner is supposed to make a preselection in the form of a subset of alternatives.
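Under the Plackett-Luce model, a ranking is generated by repeatedly choosing the next item with probability proportional to its latent skill among the remaining items. A minimal sampler illustrating that generative story (function name and toy skills are illustrative assumptions):

```python
import random

def sample_plackett_luce(skills, rng=random):
    """Sample a ranking from a Plackett-Luce model: at each position,
    draw the next item with probability proportional to its skill
    among the items not yet placed."""
    items = list(range(len(skills)))
    ranking = []
    while items:
        total = sum(skills[i] for i in items)
        r = rng.random() * total
        for idx, i in enumerate(items):
            r -= skills[i]
            if r <= 0:
                ranking.append(items.pop(idx))
                break
    return ranking

# Item 0 has by far the largest skill, so it tends to be ranked first.
ranking = sample_plackett_luce([5.0, 1.0, 0.5], random.Random(0))
```

In the preselection setting, the learner's subset choice restricts which items can appear in the sampled order at all.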

Extreme Algorithm Selection With Dyadic Feature Representation

1 code implementation • 29 Jan 2020 • Alexander Tornede, Marcel Wever, Eyke Hüllermeier

Algorithm selection (AS) deals with selecting an algorithm from a fixed set of candidate algorithms most suitable for a specific instance of an algorithmic problem, e.g., choosing solvers for SAT problems.

Hyperparameter Optimization Meta-Learning

Aleatoric and Epistemic Uncertainty with Random Forests

1 code implementation • 3 Jan 2020 • Mohammad Hossein Shaker, Eyke Hüllermeier

In particular, the idea of distinguishing between two important types of uncertainty, often referred to as aleatoric and epistemic, has recently been studied in the setting of supervised learning.

BIG-bench Machine Learning

TSK-Streams: Learning TSK Fuzzy Systems on Data Streams

1 code implementation • 10 Nov 2019 • Ammar Shaker, Eyke Hüllermeier

The problem of adaptive learning from evolving and possibly non-stationary data streams has attracted a lot of interest in machine learning in the recent past, and also stimulated research in related fields, such as computational intelligence and fuzzy systems.

regression

Aleatoric and Epistemic Uncertainty in Machine Learning: An Introduction to Concepts and Methods

1 code implementation • 21 Oct 2019 • Eyke Hüllermeier, Willem Waegeman

The notion of uncertainty is of major importance in machine learning and constitutes a key element of machine learning methodology.

BIG-bench Machine Learning

Epistemic Uncertainty Sampling

no code implementations • 31 Aug 2019 • Vu-Linh Nguyen, Sébastien Destercke, Eyke Hüllermeier

In this paper, we advocate a distinction between two different types of uncertainty, referred to as epistemic and aleatoric, in the context of active learning.

Active Learning

Preselection Bandits

no code implementations • ICML 2020 • Viktor Bengs, Eyke Hüllermeier

To formalize this goal, we introduce a reasonable notion of regret and derive lower bounds on the expected regret.

Efficient Set-Valued Prediction in Multi-Class Classification

4 code implementations • 19 Jun 2019 • Thomas Mortier, Marek Wydmuch, Krzysztof Dembczyński, Eyke Hüllermeier, Willem Waegeman

In cases of uncertainty, a multi-class classifier preferably returns a set of candidate classes instead of predicting a single class label with little guarantee.

Classification General Classification +1

Rectifying Classifier Chains for Multi-Label Classification

no code implementations • 7 Jun 2019 • Robin Senge, Juan José del Coz, Eyke Hüllermeier

Classifier chains have recently been proposed as an appealing method for tackling the multi-label classification task.

Attribute Classification +2

Reliable Multi-label Classification: Prediction with Partial Abstention

no code implementations • 19 Apr 2019 • Vu-Linh Nguyen, Eyke Hüllermeier

In contrast to conventional (single-label) classification, the setting of multilabel classification (MLC) allows an instance to belong to several classes simultaneously.

Classification General Classification +1

Learning Context-Dependent Choice Functions

1 code implementation • 29 Jan 2019 • Karlson Pfannschmidt, Pritha Gupta, Björn Haddenhorst, Eyke Hüllermeier

Choice functions accept a set of alternatives as input and produce a preferred subset of these alternatives as output.

Analogy-Based Preference Learning with Kernels

no code implementations • 7 Jan 2019 • Mohsen Ahmadi Fahandar, Eyke Hüllermeier

Building on a specific formalization of analogical relationships of the form "A relates to B as C relates to D", we establish a connection between two important subfields of artificial intelligence, namely analogical reasoning and kernel-based machine learning.

Learning Interpretable Rules for Multi-label Classification

1 code implementation30 Nov 2018 Eneldo Loza Mencía, Johannes Fürnkranz, Eyke Hüllermeier, Michael Rapp

Multi-label classification (MLC) is a supervised learning problem in which, contrary to standard multiclass classification, an instance can be associated with several class labels simultaneously.

Classification General Classification +1

Preference-based Online Learning with Dueling Bandits: A Survey

no code implementations30 Jul 2018 Viktor Bengs, Robert Busa-Fekete, Adil El Mesaoudi-Paul, Eyke Hüllermeier

The aim of this paper is to provide a survey of the state of the art in this field, referred to as preference-based multi-armed bandits or dueling bandits.

Multi-Armed Bandits Survey
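The setting can be illustrated with a naive Borda-style baseline, far simpler than the regret-optimal algorithms the survey covers: duel uniformly random pairs of arms and return the arm with the highest empirical win rate. The preference matrix `pref` is an assumption of this sketch, with `pref[i][j]` the probability that arm i beats arm j.

```python
import random

def borda_duel(pref, rounds=2000, seed=0):
    """Naive dueling-bandits baseline: sample random pairs, record
    who wins each duel, and return the arm with the best win rate."""
    rng = random.Random(seed)
    K = len(pref)
    wins, plays = [0] * K, [0] * K
    for _ in range(rounds):
        i, j = rng.sample(range(K), 2)
        winner = i if rng.random() < pref[i][j] else j
        wins[winner] += 1
        plays[i] += 1
        plays[j] += 1
    return max(range(K), key=lambda a: wins[a] / plays[a])

# Arm 0 beats both other arms with probability 0.8
pref = [[0.5, 0.8, 0.8], [0.2, 0.5, 0.6], [0.2, 0.4, 0.5]]
print(borda_duel(pref))
```

Unlike this exploration-only sketch, the algorithms surveyed in the paper trade exploration against exploitation to keep the regret of the duels played low.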

ML-Plan: Automated machine learning via hierarchical planning

1 code implementation Machine Learning 2018 Felix Mohr, Marcel Wever, Eyke Hüllermeier

Automated machine learning (AutoML) seeks to automatically select, compose, and parametrize machine learning algorithms, so as to achieve optimal performance on a given task (dataset).

AutoML BIG-bench Machine Learning

Ranking Distributions based on Noisy Sorting

no code implementations ICML 2018 Adil El Mesaoudi-Paul, Eyke Hüllermeier, Robert Busa-Fekete

We also introduce a generalization of the model, in which the constraints on pairwise preferences are relaxed, and for which maximum likelihood estimation can be carried out based on a variation of the generalized iterative scaling algorithm.

Mining Rank Data

no code implementations15 Jun 2018 Sascha Henzgen, Eyke Hüllermeier

The problem of frequent pattern mining has been studied quite extensively for various types of data, including sets, sequences, and graphs.

Statistical Inference for Incomplete Ranking Data: The Case of Rank-Dependent Coarsening

no code implementations ICML 2017 Mohsen Ahmadi Fahandar, Eyke Hüllermeier, Inés Couso

We consider the problem of statistical inference for ranking data, specifically rank aggregation, under the assumption that samples are incomplete in the sense of not comprising all choice alternatives.

From knowledge-based to data-driven modeling of fuzzy rule-based systems: A critical reflection

no code implementations2 Dec 2017 Eyke Hüllermeier

This paper briefly elaborates on a development in (applied) fuzzy logic over the last couple of decades: the complementation, or even replacement, of the traditional knowledge-based approach to designing fuzzy rule-based systems by a data-driven one.

Learning to Rank based on Analogical Reasoning

no code implementations28 Nov 2017 Mohsen Ahmadi Fahandar, Eyke Hüllermeier

In this paper, we propose a new approach to object ranking based on principles of analogical reasoning.

Learning-To-Rank Object +1

Predicting Rankings of Software Verification Competitions

2 code implementations2 Mar 2017 Mike Czech, Eyke Hüllermeier, Marie-Christine Jakobs, Heike Wehrheim

Software verification competitions, such as the annual SV-COMP, evaluate software verification tools with respect to their effectiveness and efficiency.

Online F-Measure Optimization

no code implementations NeurIPS 2015 Róbert Busa-Fekete, Balázs Szörényi, Krzysztof Dembczynski, Eyke Hüllermeier

In this paper, we study the problem of F-measure maximization in the setting of online learning.
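A key fact exploited in this line of work is that the F-measure-optimal threshold on posterior probabilities equals half the optimal F-measure. The sketch below uses that property as a simplified online thresholding rule; it is a stand-in for, not a faithful implementation of, the paper's algorithm:

```python
def online_f_threshold(stream):
    """For each (probability, label) pair: predict positive when the
    probability exceeds half the running F-measure, then update the
    running true-positive / false-positive / false-negative counts.
    Returns the final F-measure F = 2*TP / (2*TP + FP + FN)."""
    tp = fp = fn = 0
    f = 0.0
    for prob, label in stream:
        pred = int(prob > f / 2)
        tp += pred and label
        fp += pred and not label
        fn += (not pred) and label
        if 2 * tp + fp + fn:
            f = 2 * tp / (2 * tp + fp + fn)
    return f

print(online_f_threshold([(0.9, 1), (0.8, 1), (0.1, 0), (0.2, 0)]))
```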

Online Rank Elicitation for Plackett-Luce: A Dueling Bandits Approach

no code implementations NeurIPS 2015 Balázs Szörényi, Róbert Busa-Fekete, Adil Paul, Eyke Hüllermeier

We study the problem of online rank elicitation, assuming that rankings of a set of alternatives obey the Plackett-Luce distribution.
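Sampling from a Plackett-Luce distribution is straightforward and helps make the model concrete: each item carries a positive weight, and the ranking is built top-down by repeatedly picking the next item with probability proportional to its weight among the items not yet ranked.

```python
import random

def sample_plackett_luce(weights, rng):
    """Draw one ranking (list of item indices, best first) from a
    Plackett-Luce model with the given positive weights."""
    items = list(range(len(weights)))
    ranking = []
    while items:
        pick = rng.choices(items, weights=[weights[i] for i in items], k=1)[0]
        ranking.append(pick)
        items.remove(pick)
    return ranking

# Item 0 has ten times the weight of item 1, so it is usually ranked first
print(sample_plackett_luce([10.0, 1.0, 1.0], random.Random(0)))
```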

Identification of functionally related enzymes by learning-to-rank methods

no code implementations17 May 2014 Michiel Stock, Thomas Fober, Eyke Hüllermeier, Serghei Glinca, Gerhard Klebe, Tapio Pahikkala, Antti Airola, Bernard De Baets, Willem Waegeman

For a given query, the search operation results in a ranking of the enzymes in the database, from very similar to dissimilar enzymes, while information about the biological function of annotated database enzymes is ignored.

Learning-To-Rank

An Exact Algorithm for F-Measure Maximization

no code implementations NeurIPS 2011 Krzysztof J. Dembczynski, Willem Waegeman, Weiwei Cheng, Eyke Hüllermeier

The F-measure, originally introduced in information retrieval, is nowadays routinely used as a performance metric for problems such as binary classification, multi-label classification, and structured output prediction.

Binary Classification Classification +4
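For reference, the F-measure discussed here is the harmonic mean of precision P and recall R, which can be written directly in terms of true positives, false positives, and false negatives:

```latex
F = \frac{2PR}{P + R} = \frac{2\,\mathrm{TP}}{2\,\mathrm{TP} + \mathrm{FP} + \mathrm{FN}}
```

Because it is non-decomposable over individual examples, exact maximization of its expectation is non-trivial, which is the problem this paper addresses.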
