Search Results for author: Johannes Fürnkranz

Found 30 papers, 10 papers with code

A Comparison of Contextual and Non-Contextual Preference Ranking for Set Addition Problems

no code implementations • 9 Jul 2021 • Timo Bertram, Johannes Fürnkranz, Martin Müller

We discuss and compare two different Siamese network architectures for this task: a twin network that compares the two sets resulting after the addition, and a triplet network that models the contribution of each candidate to the existing set.
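The triplet idea can be illustrated with a toy sketch. Here a fixed random linear map stands in for the learned Siamese encoder (the abstract does not specify one), and a candidate is preferred when its embedding lies closest to the embedding of the existing set, which serves as the anchor:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))  # stand-in for a learned encoder (hypothetical)

def embed(item_set):
    """Map a set of 8-dim item vectors to one 4-dim embedding (sum-pool + project)."""
    return np.asarray(item_set).sum(axis=0) @ W

def rank_candidates(context, candidates):
    """Triplet-style scoring: prefer the candidate whose embedding is
    closest to the embedding of the existing set (the anchor)."""
    anchor = embed(context)
    dists = [np.linalg.norm(embed([c]) - anchor) for c in candidates]
    return int(np.argmin(dists))
```

In the actual architecture the encoder is trained with a triplet loss so that chosen candidates embed closer to their context than rejected ones; this sketch only shows the inference-time comparison.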

Gradient-based Label Binning in Multi-label Classification

1 code implementation • 22 Jun 2021 • Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz, Eyke Hüllermeier

Based on the derivatives computed during training, we dynamically group the labels into a predefined number of bins to impose an upper bound on the dimensionality of the linear system.

Classification • Multi-Label Classification
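A minimal sketch of the grouping step, assuming (as a simplification of the paper's method) that a single scalar gradient statistic per label is available: labels are sorted by that statistic and split into a fixed number of bins, so the linear system to solve has at most `n_bins` unknowns instead of one per label.

```python
import numpy as np

def bin_labels(gradients, n_bins):
    """Group label indices into n_bins contiguous bins after sorting by
    their gradient statistic; labels with similar gradients share a bin."""
    order = np.argsort(gradients)  # label indices ranked by gradient value
    return [b.tolist() for b in np.array_split(order, n_bins)]

# Five labels, grouped into two bins by their gradients.
bins = bin_labels([0.9, -0.1, 0.4, -0.8, 0.2], n_bins=2)
```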

An Investigation into Mini-Batch Rule Learning

no code implementations • 18 Jun 2021 • Florian Beck, Johannes Fürnkranz

We investigate whether it is possible to learn rule sets efficiently in a network structure with a single hidden layer using iterative refinements over mini-batches of examples.

An Empirical Investigation into Deep and Shallow Rule Learning

no code implementations • 18 Jun 2021 • Florian Beck, Johannes Fürnkranz

Inductive rule learning is arguably among the most traditional paradigms in machine learning.

Predicting Human Card Selection in Magic: The Gathering with Contextual Preference Ranking

1 code implementation • 25 May 2021 • Timo Bertram, Johannes Fürnkranz, Martin Müller

Drafting, i.e., the selection of a subset of items from a larger candidate set, is a key element of many games and related problems.

Card Games

Elliptical Ordinal Embedding

no code implementations • 21 May 2021 • Aïssatou Diallo, Johannes Fürnkranz

Typically, each object is mapped onto a point vector in a low dimensional metric space.

Revisiting Non-Specific Syndromic Surveillance

1 code implementation • 28 Jan 2021 • Moritz Kulessa, Eneldo Loza Mencía, Johannes Fürnkranz

Infectious disease surveillance is of great importance for the prevention of major outbreaks.

Ordinal Monte Carlo Tree Search

no code implementations • 26 Jan 2021 • Tobias Joppen, Johannes Fürnkranz

In this paper, we take a look at MCTS, a popular algorithm for solving MDPs, highlight a recurring problem concerning its use of rewards, and show that an ordinal treatment of the rewards overcomes this problem.
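The core of an ordinal treatment can be sketched as follows (a simplified illustration, not the paper's exact backup rule): instead of averaging rewards, value an action by the fraction of pairwise comparisons its reward samples win against all observed rewards, which makes the value invariant under any monotone rescaling of the reward signal.

```python
def ordinal_value(own_rewards, all_rewards):
    """Fraction of pairwise comparisons this action's reward samples win
    (ties count half) against all observed rewards; depends only on the
    ordering of rewards, not on their numeric scale."""
    wins = sum(1.0 if a > b else 0.5 if a == b else 0.0
               for a in own_rewards for b in all_rewards)
    return wins / (len(own_rewards) * len(all_rewards))
```

Because only comparisons enter the computation, applying any strictly increasing transformation to all rewards leaves the value unchanged, which is exactly the property a mean-based backup lacks.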

Learning Structured Declarative Rule Sets -- A Challenge for Deep Discrete Learning

no code implementations • 8 Dec 2020 • Johannes Fürnkranz, Eyke Hüllermeier, Eneldo Loza Mencía, Michael Rapp

Arguably the key reason for the success of deep neural networks is their ability to autonomously form non-linear combinations of the input features, which can be used in subsequent layers of the network.

A Flexible Class of Dependence-aware Multi-Label Loss Functions

no code implementations • 2 Nov 2020 • Eyke Hüllermeier, Marcel Wever, Eneldo Loza Mencia, Johannes Fürnkranz, Michael Rapp

For evaluating such predictions, the set of predicted labels needs to be compared to the ground-truth label set associated with that instance, and various loss functions have been proposed for this purpose.

Multi-Label Classification

Conformal Rule-Based Multi-label Classification

no code implementations • 16 Jul 2020 • Eyke Hüllermeier, Johannes Fürnkranz, Eneldo Loza Mencia

We advocate the use of conformal prediction (CP) to enhance rule-based multi-label classification (MLC).

Classification • Decision Making +2
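The split-conformal mechanism behind CP can be sketched in a few lines (a generic illustration, not the paper's rule-based instantiation; the nonconformity score, e.g. one minus a predicted label probability, is an assumption here): a quantile of calibration scores yields a threshold, and every label scoring within it enters the predicted set.

```python
import math

def conformal_threshold(calib_scores, alpha):
    """Split-conformal quantile: the ceil((n+1)(1-alpha))-th smallest
    calibration nonconformity score."""
    n = len(calib_scores)
    k = math.ceil((n + 1) * (1 - alpha))
    return sorted(calib_scores)[min(k, n) - 1]

def predict_label_set(label_scores, threshold):
    """Keep every label whose nonconformity score is within the threshold."""
    return [lbl for lbl, s in label_scores.items() if s <= threshold]
```

Under the usual exchangeability assumption, the resulting label sets cover the true labels with probability at least 1 - alpha.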

Learning Gradient Boosted Multi-label Classification Rules

1 code implementation • 23 Jun 2020 • Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz, Vu-Linh Nguyen, Eyke Hüllermeier

In multi-label classification, where the evaluation of predictions is less straightforward than in single-label classification, various meaningful, though different, loss functions have been proposed.

Classification • General Classification +1

On Aggregation in Ensembles of Multilabel Classifiers

no code implementations • 21 Jun 2020 • Vu-Linh Nguyen, Eyke Hüllermeier, Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz

While a variety of ensemble methods for multilabel classification have been proposed in the literature, the question of how to aggregate the predictions of the individual members of the ensemble has received little attention so far.

General Classification
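Two standard aggregation schemes the question concerns can be sketched side by side (a generic illustration of the design space, not the paper's analysis): label-wise averaging of the members' probabilities versus label-wise majority voting over their hard predictions. The two can produce different label sets for the same ensemble.

```python
def average_then_threshold(member_probs, t=0.5):
    """Label-wise mean of the members' probabilities, then threshold at t."""
    n = len(member_probs)
    n_labels = len(member_probs[0])
    means = [sum(p[j] for p in member_probs) / n for j in range(n_labels)]
    return [int(m >= t) for m in means]

def majority_vote(member_preds):
    """Label-wise majority over the members' hard 0/1 predictions
    (ties are broken in favour of predicting the label)."""
    n = len(member_preds)
    return [int(sum(p[j] for p in member_preds) * 2 >= n)
            for j in range(len(member_preds[0]))]
```

For example, three members with probabilities [0.9, 0.2], [0.6, 0.4], [0.3, 0.9] average to a positive second label, while their thresholded votes reject it, which is one reason the choice of aggregation interacts with the target loss.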

Simplifying Random Forests: On the Trade-off between Interpretability and Accuracy

no code implementations • 11 Nov 2019 • Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz

We analyze the trade-off between model complexity and accuracy for random forests by breaking the trees up into individual classification rules and selecting a subset of them.

General Classification
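The decomposition step can be sketched on a toy tree (the nested-dict representation and leaf labels are illustrative assumptions, not the paper's data structures): each root-to-leaf path becomes one classification rule, and a subset of these rules can then be selected to trade accuracy for interpretability.

```python
def tree_to_rules(node, conditions=()):
    """Flatten a decision tree (nested dicts) into one rule per leaf:
    a (conditions, prediction) pair, where conditions is a list of
    (feature, operator, threshold) tests along the path."""
    if "leaf" in node:
        return [(list(conditions), node["leaf"])]
    feat, thr = node["feature"], node["threshold"]
    return (tree_to_rules(node["left"],  conditions + ((feat, "<=", thr),)) +
            tree_to_rules(node["right"], conditions + ((feat, ">",  thr),)))

toy_tree = {
    "feature": "x1", "threshold": 2.0,
    "left":  {"leaf": "neg"},
    "right": {"feature": "x2", "threshold": 0.5,
              "left": {"leaf": "neg"}, "right": {"leaf": "pos"}},
}
rules = tree_to_rules(toy_tree)  # three rules, one per leaf
```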

Advances in Machine Learning for the Behavioral Sciences

no code implementations • 8 Nov 2019 • Tomáš Kliegr, Štěpán Bahník, Johannes Fürnkranz

The areas of machine learning and knowledge discovery in databases have considerably matured in recent years.

Learning to play the Chess Variant Crazyhouse above World Champion Level with Deep Neural Networks and Human Data

2 code implementations • 19 Aug 2019 • Johannes Czech, Moritz Willig, Alena Beyer, Kristian Kersting, Johannes Fürnkranz

Crazyhouse is a game with a higher branching factor than chess and there is only limited data of lower quality available compared to AlphaGo.

Board Games

On the Trade-off Between Consistency and Coverage in Multi-label Rule Learning Heuristics

1 code implementation • 8 Aug 2019 • Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz

Many rule learning algorithms employ a heuristic-guided search for rules that model regularities contained in the training data, and it is commonly accepted that the choice of the heuristic has a significant impact on the predictive performance of the learner.

Multi-Label Classification

Improving Outbreak Detection with Stacking of Statistical Surveillance Methods

no code implementations • 17 Jul 2019 • Moritz Kulessa, Eneldo Loza Mencía, Johannes Fürnkranz

Our results on synthetic data show that it is challenging to improve the performance with a trainable fusion method based on machine learning.

Ordinal Bucketing for Game Trees using Dynamic Quantile Approximation

no code implementations • 31 May 2019 • Tobias Joppen, Tilman Strübig, Johannes Fürnkranz

In this paper, we present a simple and cheap ordinal bucketing algorithm that approximately generates $q$-quantiles from an incremental data stream.
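The task of approximating $q$-quantiles from an incremental stream can be sketched with a reservoir-sampling baseline (explicitly not the authors' algorithm, just a cheap stand-in for the problem setting): keep a bounded uniform sample of the stream and read quantiles off the sorted reservoir.

```python
import random

class StreamingQuantiles:
    """Approximate quantiles of a stream with a fixed-size uniform
    reservoir (classic reservoir sampling); memory stays bounded
    regardless of stream length."""
    def __init__(self, capacity=256, seed=0):
        self.capacity, self.n = capacity, 0
        self.reservoir = []
        self.rng = random.Random(seed)

    def add(self, x):
        self.n += 1
        if len(self.reservoir) < self.capacity:
            self.reservoir.append(x)
        else:
            # Replace a random slot with probability capacity / n.
            j = self.rng.randrange(self.n)
            if j < self.capacity:
                self.reservoir[j] = x

    def quantile(self, q):
        s = sorted(self.reservoir)
        return s[min(int(q * len(s)), len(s) - 1)]
```

When the stream fits in the reservoir the quantiles are exact; beyond that, accuracy degrades gracefully with the capacity, which is the trade-off a dedicated dynamic quantile scheme improves on.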

Deep Ordinal Reinforcement Learning

1 code implementation • 6 May 2019 • Alexander Zap, Tobias Joppen, Johannes Fürnkranz

Reinforcement learning usually makes use of numerical rewards, which have nice properties but also come with drawbacks and difficulties.

OpenAI Gym • Q-Learning

Ordinal Monte Carlo Tree Search

no code implementations • 14 Jan 2019 • Tobias Joppen, Johannes Fürnkranz

In many problem settings, most notably in game playing, an agent receives a possibly delayed reward for its actions.

Learning Interpretable Rules for Multi-label Classification

1 code implementation • 30 Nov 2018 • Eneldo Loza Mencía, Johannes Fürnkranz, Eyke Hüllermeier, Michael Rapp

Multi-label classification (MLC) is a supervised learning problem in which, contrary to standard multiclass classification, an instance can be associated with several class labels simultaneously.

Classification • General Classification +1

Preference-Based Monte Carlo Tree Search

no code implementations • 17 Jul 2018 • Tobias Joppen, Christian Wirth, Johannes Fürnkranz

To deal with such cases, the experimenter has to supply an additional numeric feedback signal in the form of a heuristic, which intrinsically guides the agent.

A review of possible effects of cognitive biases on the interpretation of rule-based machine learning models

no code implementations • 9 Apr 2018 • Tomáš Kliegr, Štěpán Bahník, Johannes Fürnkranz

While the interpretability of machine learning models is often equated with their mere syntactic comprehensibility, we think that interpretability goes beyond that, and that human interpretability should also be investigated from the point of view of cognitive science.

Interpretable Machine Learning

On Cognitive Preferences and the Plausibility of Rule-based Models

1 code implementation • 4 Mar 2018 • Johannes Fürnkranz, Tomáš Kliegr, Heiko Paulheim

It is conventional wisdom in machine learning and data mining that logical models such as rule sets are more interpretable than other models, and that among such rule-based models, simpler models are more interpretable than more complex ones.

On Learning Vector Representations in Hierarchical Label Spaces

no code implementations • 22 Dec 2014 • Jinseok Nam, Johannes Fürnkranz

We present a novel method to learn vector representations of a label space given a hierarchy of labels and label co-occurrence patterns.

General Classification • Multi-Label Classification

Large-scale Multi-label Text Classification - Revisiting Neural Networks

no code implementations • 19 Dec 2013 • Jinseok Nam, Jungi Kim, Eneldo Loza Mencía, Iryna Gurevych, Johannes Fürnkranz

Neural networks have recently been proposed for multi-label classification because they are able to capture and model label dependencies in the output layer.

Classification • General Classification +2
