Search Results for author: Susanne Dandl

Found 9 papers, 5 with code

CountARFactuals -- Generating plausible model-agnostic counterfactual explanations with adversarial random forests

no code implementations • 4 Apr 2024 • Susanne Dandl, Kristin Blesch, Timo Freiesleben, Gunnar König, Jan Kapar, Bernd Bischl, Marvin Wright

Counterfactual explanations elucidate algorithmic decisions by pointing to scenarios that would have led to an alternative, desired outcome.

Tasks: counterfactual

Leveraging Model-based Trees as Interpretable Surrogate Models for Model Distillation

1 code implementation • 4 Oct 2023 • Julia Herbinger, Susanne Dandl, Fiona K. Ewald, Sofia Loibl, Giuseppe Casalicchio

Surrogate models play a crucial role in retrospectively interpreting complex and powerful black box machine learning models via model distillation.
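
The distillation recipe such surrogates build on can be illustrated with a minimal sketch (plain scikit-learn, not the model-based trees proposed in the paper): train a black-box model, then fit an interpretable surrogate on the black box's predictions rather than on the original labels, and report how faithfully the surrogate mimics the black box.

```python
# Minimal model-distillation sketch (illustrative; not the paper's model-based trees).
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=1000, n_features=8, noise=0.3, random_state=0)

# Black-box model trained on the original labels.
black_box = GradientBoostingRegressor(random_state=0).fit(X, y)

# Interpretable surrogate fitted on the black box's predictions, not on y.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how well the surrogate reproduces the black-box predictions.
print("surrogate fidelity (R^2 vs. black box):",
      surrogate.score(X, black_box.predict(X)))
```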

Causal Fair Machine Learning via Rank-Preserving Interventional Distributions

1 code implementation • 24 Jul 2023 • Ludwig Bothmann, Susanne Dandl, Michael Schomaker

A decision can be defined as fair if equal individuals are treated equally and unequals unequally.

Tasks: Attribute, Decision Making

Interpretable Regional Descriptors: Hyperbox-Based Local Explanations

no code implementations • 4 May 2023 • Susanne Dandl, Giuseppe Casalicchio, Bernd Bischl, Ludwig Bothmann

This work introduces interpretable regional descriptors, or IRDs, for local, model-agnostic interpretations.
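
The hyperbox idea can be sketched in a few lines (a simplified validity check under an assumed uniform-sampling scheme, not the search procedure from the paper; all names below are illustrative): an IRD assigns each feature an interval around the query point, and the box is useful only if the model's prediction stays in the desired class everywhere inside it.

```python
# Simplified Monte-Carlo check of the hyperbox idea behind IRDs (illustrative only).
import numpy as np

def box_keeps_prediction(model, lower, upper, target_class, n_samples=1000, seed=0):
    """Sample points inside the per-feature intervals [lower, upper] and check
    that the model's predicted class never leaves target_class."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    samples = rng.uniform(lower, upper, size=(n_samples, lower.shape[0]))
    return bool(np.all(model.predict(samples) == target_class))
```

A full IRD method additionally tries to grow the box as large as possible while this check keeps passing; the sketch only covers the validity side.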

counterfactuals: An R Package for Counterfactual Explanation Methods

no code implementations • 13 Apr 2023 • Susanne Dandl, Andreas Hofheinz, Martin Binder, Bernd Bischl, Giuseppe Casalicchio

Counterfactual explanation methods provide information on how feature values of individual observations must be changed to obtain a desired prediction.

Tasks: counterfactual, Counterfactual Explanation

Heterogeneous Treatment Effect Estimation for Observational Data using Model-based Forests

no code implementations • 6 Oct 2022 • Susanne Dandl, Andreas Bender, Torsten Hothorn

Most importantly, the noncollapsibility issue necessitates the joint estimation of treatment and prognostic effects.
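
The noncollapsibility point can be made concrete with a standard logistic-link example (a textbook illustration, not notation from the paper):

```latex
% Noncollapsibility under a logistic link (illustrative example).
% Conditional model with a randomized binary treatment W independent of covariates X:
\[
  \operatorname{logit} P(Y = 1 \mid W = w, X = x) \;=\; \alpha(x) + \tau\, w .
\]
% Marginalising over X does not yield a logistic model with the same coefficient:
\[
  \operatorname{logit} P(Y = 1 \mid W = w)
  \;=\; \operatorname{logit}\!\Big( \mathbb{E}_X\big[ \operatorname{logit}^{-1}\!\big(\alpha(X) + \tau w\big) \big] \Big)
  \;\neq\; c + \tau\, w \quad \text{in general},
\]
% because the nonlinear logit does not commute with averaging over \alpha(X).
% Dropping the prognostic term \alpha(x) therefore distorts the estimate of \tau,
% which is why treatment and prognostic effects have to be modelled jointly.
```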

What Makes Forest-Based Heterogeneous Treatment Effect Estimators Work?

2 code implementations • 21 Jun 2022 • Susanne Dandl, Torsten Hothorn, Heidi Seibold, Erik Sverdrup, Stefan Wager, Achim Zeileis

A related approach, called "model-based forests", which is geared towards randomized trials and simultaneously captures effects of both prognostic and predictive variables, was introduced by Seibold, Zeileis and Hothorn (2018) along with a modular implementation in the R package model4you.
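
The base model behind model-based forests can be summarized schematically (a sketch for a randomized binary treatment, paraphrasing the description above rather than the paper's exact notation):

```latex
% Schematic base model for model-based forests (sketch, not the paper's notation).
\[
  \mathbb{E}[\, Y \mid X = x,\, W = w \,] \;=\; \alpha(x) \;+\; \tau(x)\, w ,
  \qquad w \in \{0, 1\},
\]
% where \alpha(x) is the prognostic effect of the covariates and \tau(x) the
% predictive (treatment) effect. The forest fits this two-parameter model in
% each node and splits on instabilities of both parameters, so prognostic and
% treatment effects are estimated jointly rather than one after the other.
```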

General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models

1 code implementation • 8 Jul 2020 • Christoph Molnar, Gunnar König, Julia Herbinger, Timo Freiesleben, Susanne Dandl, Christian A. Scholbeck, Giuseppe Casalicchio, Moritz Grosse-Wentrup, Bernd Bischl

An increasing number of model-agnostic interpretation techniques for machine learning (ML) models, such as partial dependence plots (PDP), permutation feature importance (PFI), and Shapley values, provide insightful model interpretations, but can lead to wrong conclusions if applied incorrectly.

Tasks: BIG-bench Machine Learning, Feature Importance
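
Permutation feature importance, one of the techniques named above, can be computed with a few lines of scikit-learn; the snippet is a generic usage sketch, not code from the paper's repository.

```python
# Generic permutation-feature-importance sketch (not the paper's code).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Importances are computed on held-out data so that overfitting does not inflate them;
# each feature is shuffled n_repeats times and the drop in score is recorded.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print(result.importances_mean)
```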

Multi-Objective Counterfactual Explanations

1 code implementation • 23 Apr 2020 • Susanne Dandl, Christoph Molnar, Martin Binder, Bernd Bischl

We show the usefulness of MOC in concrete cases and compare our approach with state-of-the-art methods for counterfactual explanations.

Tasks: counterfactual
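
MOC treats the search for counterfactuals as a multi-objective optimization problem. The sketch below spells out the four objectives it balances, in simplified form (Euclidean and L0 distances stand in for the Gower distances used in the paper; function names are illustrative):

```python
# Sketch of the four objectives in multi-objective counterfactual search
# (simplified distances; MOC itself uses Gower distances and an NSGA-II variant).
import numpy as np

def moc_objectives(x_cf, x_orig, predict_fn, target, X_obs, k=5):
    """Return the four objective values for a candidate counterfactual x_cf."""
    o_pred = abs(predict_fn(x_cf) - target)       # 1) reach the desired prediction
    o_prox = np.linalg.norm(x_cf - x_orig)        # 2) stay close to the original point
    o_sparse = np.count_nonzero(x_cf != x_orig)   # 3) change as few features as possible
    dists = np.linalg.norm(X_obs - x_cf, axis=1)  # 4) stay plausible: close to observed data
    o_plaus = np.sort(dists)[:k].mean()
    return o_pred, o_prox, o_sparse, o_plaus

# A multi-objective optimizer then looks for candidates that are Pareto-optimal
# with respect to these four objectives; MOC uses a modified NSGA-II for this.
```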
