no code implementations • 21 Nov 2023 • Xuan Zhao, Simone Fabbrizzi, Paula Reyero Lobo, Siamak Ghodsi, Klaus Broelemann, Steffen Staab, Gjergji Kasneci
To balance the data distribution between the majority and the minority groups, our approach deemphasizes samples from the majority group.
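The deemphasis described above can be illustrated with a simple inverse-frequency reweighting sketch (a minimal illustration, not the paper's actual method; the group labels and helper name are hypothetical):

```python
import numpy as np

# Hypothetical group labels: 1 = majority, 0 = minority (illustrative data only).
groups = np.array([1, 1, 1, 1, 1, 1, 0, 0])

def balancing_weights(groups):
    """Inverse-frequency sample weights: majority samples are deemphasized
    so that both groups contribute equally to the training loss."""
    n = len(groups)
    weights = np.empty(n, dtype=float)
    for g in np.unique(groups):
        mask = groups == g
        weights[mask] = n / (2 * mask.sum())  # two groups assumed
    return weights / weights.sum()

w = balancing_weights(groups)
# After normalization, each group carries half of the total weight.
```

With these weights, the six majority samples together receive the same total loss contribution as the two minority samples.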
no code implementations • 17 Nov 2023 • Xuan Zhao, Klaus Broelemann, Salvatore Ruggieri, Gjergji Kasneci
The two neural networks approximate the causal model of the data and the causal model of interventions.
no code implementations • 14 Nov 2023 • Xuan Zhao, Klaus Broelemann, Gjergji Kasneci
In this paper, we introduce a novel method to generate CEs for a pre-trained regressor by first disentangling the label-relevant from the label-irrelevant dimensions in the latent space.
no code implementations • 22 Aug 2023 • Ann-Kristin Becker, Oana Dumitrasc, Klaus Broelemann
Here, we propose a distributionally invariant version of fairness measures for continuous scores with a reasonable interpretation based on the Wasserstein distance.
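For two empirical score samples of equal size, the 1-Wasserstein distance underlying such measures reduces to the mean absolute difference of the sorted samples. A minimal sketch (the function name and the group scores are illustrative assumptions, not from the paper):

```python
import numpy as np

def wasserstein_1d(scores_a, scores_b):
    """1-Wasserstein distance between two equal-size empirical score
    distributions: mean absolute difference of the sorted samples."""
    a = np.sort(np.asarray(scores_a, dtype=float))
    b = np.sort(np.asarray(scores_b, dtype=float))
    assert len(a) == len(b), "equal sample sizes assumed in this sketch"
    return float(np.mean(np.abs(a - b)))

# Hypothetical model scores for two demographic groups.
group_a = [0.2, 0.4, 0.6, 0.8]
group_b = [0.3, 0.5, 0.7, 0.9]
gap = wasserstein_1d(group_a, group_b)  # groups differ by a constant 0.1 shift
```

A gap of zero means the two groups receive identically distributed scores; larger values quantify the disparity in a way that is invariant to the overall score distribution.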
no code implementations • 25 Jul 2023 • Xuan Zhao, Klaus Broelemann, Gjergji Kasneci
In this paper, we introduce a new method to generate CEs for a pre-trained binary classifier by first shaping the latent space of an autoencoder to be a mixture of Gaussian distributions.
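Once the latent space is shaped into per-class Gaussian components, a counterfactual can be sketched as moving a latent code toward the mean of the target class's component (a toy illustration under that assumption; the means, step size, and helper name are hypothetical, and the real method operates on a trained autoencoder):

```python
import numpy as np

# Hypothetical component means for the two classes in a 2-D latent space.
mu_neg = np.array([-2.0, 0.0])
mu_pos = np.array([2.0, 0.0])

def counterfactual_latent(z, target_mu, step=0.5):
    """Move a latent code part of the way toward the target-class mean."""
    return z + step * (target_mu - z)

z = np.array([-1.5, 0.2])            # latent code of a negatively classified input
z_cf = counterfactual_latent(z, mu_pos)
# z_cf lies closer to the positive-class component; decoding it would
# yield the counterfactual example.
```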
no code implementations • 14 Mar 2023 • Carlos Mougan, Klaus Broelemann, David Masip, Gjergji Kasneci, Thanassis Tiropanis, Steffen Staab
State-of-the-art techniques then model input data distributions or model prediction distributions in order to understand issues arising from the interaction between learned models and shifting distributions.
no code implementations • 22 Oct 2022 • Carlos Mougan, Klaus Broelemann, Gjergji Kasneci, Thanassis Tiropanis, Steffen Staab
We provide a mathematical analysis of different types of distribution shifts as well as synthetic experimental examples.
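A synthetic example of one such shift type, covariate shift, can be generated by changing the input distribution while leaving the labeling mechanism fixed (an illustrative sketch, not one of the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)

# Covariate shift: P(x) changes between training and deployment,
# while the conditional P(y|x) is assumed unchanged.
x_train = rng.normal(loc=0.0, scale=1.0, size=10_000)
x_deploy = rng.normal(loc=1.5, scale=1.0, size=10_000)

# A simple shift statistic: absolute difference of empirical means.
mean_gap = abs(x_deploy.mean() - x_train.mean())
```

In practice, richer statistics (e.g. distances between full distributions) are used, but even this mean gap already flags that the deployment data no longer looks like the training data.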
1 code implementation • 30 Mar 2022 • Johannes Haug, Klaus Broelemann, Gjergji Kasneci
Dynamic Model Trees are thus a powerful online learning framework that contributes to more lightweight and interpretable machine learning in data streams.
no code implementations • 29 Sep 2021 • Vadim Borisov, Klaus Broelemann, Enkelejda Kasneci, Gjergji Kasneci
Although deep neural networks (DNNs) constitute the state-of-the-art in many tasks based on image, audio, or text data, their performance on heterogeneous, tabular data is typically inferior to that of decision tree ensembles.
no code implementations • 23 Jun 2020 • Martin Pawelczyk, Klaus Broelemann, Gjergji Kasneci
In this work, we derive a general upper bound for the costs of counterfactual explanations under predictive multiplicity.
1 code implementation • 18 Jun 2020 • Johannes Haug, Martin Pawelczyk, Klaus Broelemann, Gjergji Kasneci
Feature selection can be a crucial factor in obtaining robust and accurate predictions.
3 code implementations • 21 Oct 2019 • Martin Pawelczyk, Johannes Haug, Klaus Broelemann, Gjergji Kasneci
On the one hand, we suggest complementing the catalogue of counterfactual quality measures [1] with a criterion that quantifies the degree of difficulty of a given counterfactual suggestion.
no code implementations • 25 Sep 2018 • Klaus Broelemann, Gjergji Kasneci
We propose shallow model trees as a way to combine simple and highly transparent predictive models for higher predictive power without losing the transparency of the original models.
no code implementations • 27 Jul 2018 • Klaus Broelemann, Gjergji Kasneci
Latent truth discovery, LTD for short, refers to the problem of aggregating multiple claims from various sources in order to estimate the plausibility of statements about entities.
no code implementations • 31 Dec 2017 • Klaus Broelemann, Thomas Gottron, Gjergji Kasneci
Despite the multitude of algorithms addressing the LTD problem in the literature, little is known about their overall performance with respect to effectiveness (in terms of truth discovery capabilities), efficiency, and robustness.