Search Results for author: Klaus Broelemann

Found 15 papers, 3 papers with code

Adversarial Reweighting Guided by Wasserstein Distance for Bias Mitigation

no code implementations • 21 Nov 2023 • Xuan Zhao, Simone Fabbrizzi, Paula Reyero Lobo, Siamak Ghodsi, Klaus Broelemann, Steffen Staab, Gjergji Kasneci

To balance the data distribution between the majority and the minority groups, our approach deemphasizes samples from the majority group.

Fairness
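
A minimal sketch of the balancing idea above: statically downweight majority-group samples so both groups carry equal total weight. This is an illustrative baseline only, not the paper's adversarial, Wasserstein-guided reweighting.

```python
import numpy as np

def balance_weights(group: np.ndarray) -> np.ndarray:
    """Downweight majority samples so both groups carry equal total weight.

    `group` is binary (1 = majority, 0 = minority). A static baseline only;
    the paper learns weights adversarially, guided by the Wasserstein distance.
    """
    n_major = int((group == 1).sum())
    n_minor = int((group == 0).sum())
    weights = np.ones(len(group), dtype=float)
    weights[group == 1] = n_minor / n_major  # deemphasize majority samples
    return weights
```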

Causal Fairness-Guided Dataset Reweighting using Neural Networks

no code implementations • 17 Nov 2023 • Xuan Zhao, Klaus Broelemann, Salvatore Ruggieri, Gjergji Kasneci

The two neural networks can approximate the causal model of the data and the causal model of interventions.

Fairness

Counterfactual Explanation for Regression via Disentanglement in Latent Space

no code implementations • 14 Nov 2023 • Xuan Zhao, Klaus Broelemann, Gjergji Kasneci

In this paper, we introduce a novel method to generate CEs for a pre-trained regressor by first disentangling the label-relevant from the label-irrelevant dimensions in the latent space.

Counterfactual Explanation +2
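
A hypothetical sketch of the disentanglement idea above; `encoder`, `decoder`, and the assumption that the first `n_relevant` latent dimensions are the label-relevant ones are illustrative stand-ins, not the authors' API.

```python
import numpy as np

def regression_counterfactual(x, encoder, decoder, n_relevant, delta):
    """Perturb only the label-relevant latent dimensions and decode.

    Assumes a disentangled latent code whose first `n_relevant` dimensions
    drive the regression label; the remaining dimensions stay fixed, so the
    counterfactual stays close to the input in label-irrelevant aspects.
    """
    z = np.asarray(encoder(x), dtype=float)
    z_cf = z.copy()
    z_cf[:n_relevant] += delta  # shift only the label-relevant part
    return decoder(z_cf)
```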

Interpretable Distribution-Invariant Fairness Measures for Continuous Scores

no code implementations • 22 Aug 2023 • Ann-Kristin Becker, Oana Dumitrasc, Klaus Broelemann

Here, we propose a distributionally invariant version of fairness measures for continuous scores with a reasonable interpretation based on the Wasserstein distance.

Fairness
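
The Wasserstein idea above fits in a few lines: compare the score distributions of two groups by their 1-Wasserstein distance. This shows the raw distance only, not the paper's exact interpretable, distribution-invariant construction.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def score_disparity(scores: np.ndarray, group: np.ndarray) -> float:
    """1-Wasserstein distance between the score distributions of two groups.

    0 means identical distributions; larger values mean a bigger gap between
    the groups' continuous scores.
    """
    return wasserstein_distance(scores[group == 0], scores[group == 1])

# Example: scores for two groups drawn from slightly shifted distributions.
rng = np.random.default_rng(0)
scores = np.r_[rng.normal(0.4, 0.1, 500), rng.normal(0.5, 0.1, 500)]
group = np.r_[np.zeros(500), np.ones(500)]
print(score_disparity(scores, group))  # roughly 0.1
```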

Counterfactual Explanation via Search in Gaussian Mixture Distributed Latent Space

no code implementations • 25 Jul 2023 • Xuan Zhao, Klaus Broelemann, Gjergji Kasneci

In this paper, we introduce a new method to generate CEs for a pre-trained binary classifier by first shaping the latent space of an autoencoder to be a mixture of Gaussian distributions.

Counterfactual Explanation
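
A hypothetical sketch of the search idea above, assuming a trained autoencoder whose latent space is a mixture of Gaussians with one component per class; `encoder`, `decoder`, and `target_mean` are stand-ins, not the authors' implementation.

```python
import numpy as np

def counterfactual_via_gmm(x, encoder, decoder, target_mean, step=0.1, n_steps=20):
    """Move the latent code toward the target class's Gaussian component.

    `target_mean` is the mean of the desired class's latent component. Each
    step interpolates toward it, so the decoded sample gradually crosses the
    classifier's decision boundary. A sketch of the idea, not the paper's method.
    """
    z = np.asarray(encoder(x), dtype=float)
    for _ in range(n_steps):
        z = z + step * (np.asarray(target_mean) - z)
    return decoder(z)
```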

Explanation Shift: How Did the Distribution Shift Impact the Model?

no code implementations • 14 Mar 2023 • Carlos Mougan, Klaus Broelemann, David Masip, Gjergji Kasneci, Thanassis Tiropanis, Steffen Staab

State-of-the-art techniques model input data distributions or model prediction distributions and try to understand issues regarding the interactions between learned models and shifting distributions.

Explanation Shift: Detecting distribution shifts on tabular data via the explanation space

no code implementations • 22 Oct 2022 • Carlos Mougan, Klaus Broelemann, Gjergji Kasneci, Thanassis Tiropanis, Steffen Staab

We provide a mathematical analysis of different types of distribution shifts as well as synthetic experimental examples.
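
A runnable sketch of the shared idea behind the two entries above: compute model explanations (here, SHAP values) on reference and new data, then train a domain classifier on the explanation space; an AUC near 0.5 indicates no detectable explanation shift. The models, explainer, and detector here are assumptions, not the papers' exact setup.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy setup: a model trained on reference data, then new (shifted) inputs.
rng = np.random.default_rng(0)
X_ref = rng.normal(size=(500, 5))
y_ref = X_ref[:, 0] + rng.normal(scale=0.1, size=500)
X_new = rng.normal(loc=0.5, size=(500, 5))  # covariate shift

model = GradientBoostingRegressor().fit(X_ref, y_ref)

# Explanations (SHAP values) for both samples.
explainer = shap.TreeExplainer(model)
S_ref = explainer.shap_values(X_ref)
S_new = explainer.shap_values(X_new)

# Domain classifier on the explanation space: AUC ~0.5 means no detectable
# explanation shift, AUC near 1.0 means a strong shift.
S = np.vstack([S_ref, S_new])
d = np.r_[np.zeros(len(S_ref)), np.ones(len(S_new))]
S_tr, S_te, d_tr, d_te = train_test_split(S, d, random_state=0)
detector = LogisticRegression(max_iter=1000).fit(S_tr, d_tr)
print("explanation-shift AUC:", roc_auc_score(d_te, detector.predict_proba(S_te)[:, 1]))
```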

Dynamic Model Tree for Interpretable Data Stream Learning

1 code implementation • 30 Mar 2022 • Johannes Haug, Klaus Broelemann, Gjergji Kasneci

Dynamic Model Trees are thus a powerful online learning framework that contributes to more lightweight and interpretable machine learning in data streams.

BIG-bench Machine Learning • Interpretable Machine Learning

Robust Deep Neural Networks for Heterogeneous Tabular Data

no code implementations • 29 Sep 2021 • Vadim Borisov, Klaus Broelemann, Enkelejda Kasneci, Gjergji Kasneci

Although deep neural networks (DNNs) constitute the state-of-the-art in many tasks based on image, audio, or text data, their performance on heterogeneous, tabular data is typically inferior to that of decision tree ensembles.

On Counterfactual Explanations under Predictive Multiplicity

no code implementations • 23 Jun 2020 • Martin Pawelczyk, Klaus Broelemann, Gjergji Kasneci

In this work, we derive a general upper bound for the costs of counterfactual explanations under predictive multiplicity.

counterfactual

Learning Model-Agnostic Counterfactual Explanations for Tabular Data

3 code implementations • 21 Oct 2019 • Martin Pawelczyk, Johannes Haug, Klaus Broelemann, Gjergji Kasneci

On the one hand, we suggest complementing the catalogue of counterfactual quality measures [1] with a criterion that quantifies the degree of difficulty of a given counterfactual suggestion.

counterfactual
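
For context, two standard counterfactual quality measures sketched below; the difficulty criterion the paper adds is not reproduced here.

```python
import numpy as np

def proximity(x: np.ndarray, x_cf: np.ndarray) -> float:
    """L1 distance between input and counterfactual (lower = cheaper change)."""
    return float(np.abs(x - x_cf).sum())

def sparsity(x: np.ndarray, x_cf: np.ndarray, tol: float = 1e-8) -> int:
    """Number of features the counterfactual actually changes."""
    return int((np.abs(x - x_cf) > tol).sum())
```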

A Gradient-Based Split Criterion for Highly Accurate and Transparent Model Trees

no code implementations • 25 Sep 2018 • Klaus Broelemann, Gjergji Kasneci

We propose shallow model trees as a way to combine simple and highly transparent predictive models for higher predictive power without losing the transparency of the original models.
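
A minimal sketch of the model-tree idea above: a depth-limited tree routes samples to transparent linear models in its leaves. The split criterion here is plain variance reduction, not the paper's gradient-based criterion.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

class ShallowModelTree:
    """Depth-limited tree with a linear model in each leaf (illustrative)."""

    def __init__(self, max_depth=2):
        self.tree = DecisionTreeRegressor(max_depth=max_depth)
        self.leaf_models = {}

    def fit(self, X, y):
        self.tree.fit(X, y)                 # shallow routing structure
        leaves = self.tree.apply(X)
        for leaf in np.unique(leaves):      # transparent model per leaf
            mask = leaves == leaf
            self.leaf_models[leaf] = LinearRegression().fit(X[mask], y[mask])
        return self

    def predict(self, X):
        leaves = self.tree.apply(X)
        return np.array(
            [self.leaf_models[l].predict(x[None, :])[0] for l, x in zip(leaves, X)]
        )
```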

Combining Restricted Boltzmann Machines with Neural Networks for Latent Truth Discovery

no code implementations • 27 Jul 2018 • Klaus Broelemann, Gjergji Kasneci

Latent truth discovery, LTD for short, refers to the problem of aggregating multiple claims from various sources in order to estimate the plausibility of statements about entities.
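
A naive baseline for the LTD setting just described: pool the sources' claims by simple voting to score each statement's plausibility. The papers instead use restricted Boltzmann machines that jointly estimate source reliability and statement plausibility.

```python
from collections import defaultdict

def plausibility(claims: dict) -> dict:
    """Score each statement by the fraction of sources asserting it.

    `claims` maps a source id to the set of statements it asserts. A naive
    vote-pooling baseline; real LTD methods also weight sources by their
    estimated reliability.
    """
    votes = defaultdict(int)
    for statements in claims.values():
        for s in statements:
            votes[s] += 1
    n = len(claims)
    return {s: v / n for s, v in votes.items()}

print(plausibility({"src1": {"a"}, "src2": {"a", "b"}, "src3": {"b"}}))
```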

Restricted Boltzmann Machines for Robust and Fast Latent Truth Discovery

no code implementations • 31 Dec 2017 • Klaus Broelemann, Thomas Gottron, Gjergji Kasneci

Despite the multitude of algorithms addressing the LTD problem in the literature, little is known about their overall performance with respect to effectiveness (in terms of truth discovery capabilities), efficiency, and robustness.
