no code implementations • 4 Apr 2024 • Susanne Dandl, Kristin Blesch, Timo Freiesleben, Gunnar König, Jan Kapar, Bernd Bischl, Marvin Wright
Counterfactual explanations elucidate algorithmic decisions by pointing to scenarios that would have led to an alternative, desired outcome.
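For intuition, a minimal Python sketch of the underlying idea: greedily perturb an observation's features until a fitted model's prediction flips to the desired class. This toy search is illustrative only and is not the method proposed in the paper.

```python
# Toy greedy counterfactual search: nudge one feature at a time until the
# model predicts the desired class. Illustrative, not the paper's method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

def greedy_counterfactual(x, desired, step=0.1, max_iter=200):
    x_cf = x.copy()
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] == desired:
            return x_cf
        # try +/- step on each feature; keep the move that most increases
        # the predicted probability of the desired class
        best, best_p = x_cf, -np.inf
        for j in range(len(x_cf)):
            for d in (step, -step):
                cand = x_cf.copy()
                cand[j] += d
                p = model.predict_proba(cand.reshape(1, -1))[0, desired]
                if p > best_p:
                    best, best_p = cand, p
        x_cf = best
    return x_cf  # may still be invalid if the budget ran out

x = X[0]
desired = 1 - model.predict(x.reshape(1, -1))[0]
x_cf = greedy_counterfactual(x, desired)
print("features changed:", np.flatnonzero(~np.isclose(x, x_cf)))
```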
1 code implementation • 4 Oct 2023 • Julia Herbinger, Susanne Dandl, Fiona K. Ewald, Sofia Loibl, Giuseppe Casalicchio
Surrogate models play a crucial role in retrospectively interpreting complex and powerful black box machine learning models via model distillation.
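The core recipe of model distillation fits in a few lines: train an interpretable surrogate on the black box's predictions rather than on the true labels, then measure how faithfully it mimics them. A minimal sketch with scikit-learn:

```python
# Global surrogate via distillation: the interpretable model is trained on
# the black box's predictions, not on the original labels.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

X, y = make_regression(n_samples=500, n_features=6, random_state=0)
black_box = RandomForestRegressor(random_state=0).fit(X, y)

y_hat = black_box.predict(X)                      # distillation targets
surrogate = DecisionTreeRegressor(max_depth=3).fit(X, y_hat)

# Fidelity: how closely the surrogate mimics the black box
print("fidelity R^2:", round(r2_score(y_hat, surrogate.predict(X)), 3))
```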
1 code implementation • 24 Jul 2023 • Ludwig Bothmann, Susanne Dandl, Michael Schomaker
A decision can be defined as fair if equal individuals are treated equally and unequal individuals unequally.
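As a toy illustration of the "equals treated equally" principle, one can flip only a protected attribute while holding all other features fixed and check whether a model's decision changes; this is a simplified check, not the formal framework developed in the paper.

```python
# Toy check of "equal individuals are treated equally": flip only the
# protected attribute and see whether the model's decision changes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
group = rng.integers(0, 2, n)                     # protected attribute
skill = rng.normal(size=n)                        # legitimate feature
y = (skill + 0.8 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, y)

X_flipped = X.copy()
X_flipped[:, 0] = 1 - X_flipped[:, 0]             # same skill, other group
changed = model.predict(X) != model.predict(X_flipped)
print(f"decisions altered by the protected attribute alone: {changed.mean():.1%}")
```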
no code implementations • 4 May 2023 • Susanne Dandl, Giuseppe Casalicchio, Bernd Bischl, Ludwig Bothmann
This work introduces interpretable regional descriptors, or IRDs, for local, model-agnostic interpretations.
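A rough sketch of the idea behind a regional descriptor, under the simplifying assumption that the region is a hyperbox grown around the observation and validated by Monte-Carlo sampling; the paper's actual procedures differ in detail.

```python
# Sketch: grow a hyperbox around an observation for as long as Monte-Carlo
# samples drawn inside it all keep the original prediction.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=3, n_informative=3,
                           n_redundant=0, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X, y)

def hyperbox(x, step=0.05, n_mc=200, max_steps=40):
    rng = np.random.default_rng(1)
    target = model.predict(x.reshape(1, -1))[0]
    lower, upper = x.copy(), x.copy()
    dims = np.arange(len(x))
    for j in dims:
        for sign, bound in ((-1, lower), (+1, upper)):
            for _ in range(max_steps):
                trial = bound[j] + sign * step
                lo = np.where(dims == j, min(lower[j], trial), lower)
                hi = np.where(dims == j, max(upper[j], trial), upper)
                samples = rng.uniform(lo, hi, size=(n_mc, len(x)))
                if (model.predict(samples) == target).all():
                    bound[j] = trial                # commit the extension
                else:
                    break
    return lower, upper

lo, hi = hyperbox(X[0])
print("per-feature box:", [f"[{a:.2f}, {b:.2f}]" for a, b in zip(lo, hi)])
```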
no code implementations • 13 Apr 2023 • Susanne Dandl, Andreas Hofheinz, Martin Binder, Bernd Bischl, Giuseppe Casalicchio
Counterfactual explanation methods provide information on how feature values of individual observations must be changed to obtain a desired prediction.
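One way to picture such a package is a single interface that different counterfactual algorithms plug into. The Python sketch below is hypothetical: the class and method names are invented for illustration and are not the package's API, which is in R.

```python
# Hypothetical unified interface (names invented for illustration): every
# method receives a model, an observation, and a desired class, and returns
# a counterfactual candidate.
from abc import ABC, abstractmethod
import numpy as np

class CounterfactualMethod(ABC):
    def __init__(self, model):
        self.model = model

    @abstractmethod
    def generate(self, x: np.ndarray, desired: int) -> np.ndarray: ...

class RandomSearchCF(CounterfactualMethod):
    """Simplest baseline: sample perturbations, keep the closest valid one."""
    def generate(self, x, desired, n=2000, scale=0.5, seed=0):
        rng = np.random.default_rng(seed)
        cands = x + rng.normal(scale=scale, size=(n, len(x)))
        valid = cands[self.model.predict(cands) == desired]
        if len(valid) == 0:
            raise ValueError("no counterfactual found; widen the search")
        return valid[np.abs(valid - x).sum(axis=1).argmin()]
```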
no code implementations • 6 Oct 2022 • Susanne Dandl, Andreas Bender, Torsten Hothorn
Most importantly, the noncollapsibility issue necessitates the joint estimation of treatment and prognostic effects.
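Noncollapsibility can be demonstrated numerically: in a logistic model, the marginal odds ratio differs from the conditional one even when treatment is randomized, i.e. without any confounding. A small simulation, illustrative only:

```python
# Noncollapsibility of the odds ratio: the covariate z is independent of the
# randomized treatment, yet the marginal OR is attenuated relative to the
# conditional OR encoded in the data-generating model.
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(0)
n = 500_000
treat = rng.integers(0, 2, n)                 # randomized treatment
z = rng.normal(size=n)                        # prognostic covariate
beta = 1.0                                    # conditional log-odds ratio
y = rng.binomial(1, expit(-0.5 + beta * treat + 2.0 * z))

def odds(mask):
    q = y[mask].mean()
    return q / (1 - q)

print("conditional OR:", round(np.exp(beta), 2))            # 2.72
print("marginal OR:   ", round(odds(treat == 1) / odds(treat == 0), 2))
```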
2 code implementations • 21 Jun 2022 • Susanne Dandl, Torsten Hothorn, Heidi Seibold, Erik Sverdrup, Stefan Wager, Achim Zeileis
A related approach, called "model-based forests", that is geared towards randomized trials and simultaneously captures effects of both prognostic and predictive variables, was introduced by Seibold, Zeileis and Hothorn (2018) along with a modular implementation in the R package model4you.
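A much-simplified Python analogue of the idea (not model4you, which grows trees using parameter-instability tests): partition the covariate space with a tree, then fit a base model of outcome on treatment within each leaf to obtain subgroup-specific treatment effects.

```python
# Simplified analogue: tree partition on covariates, per-leaf base model of
# outcome on treatment. model4you instead splits on parameter instability.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))
treat = rng.integers(0, 2, n)                   # randomized treatment
tau = np.where(X[:, 0] > 0, 2.0, 0.5)           # predictive variable X[:, 0]
y = X[:, 1] + tau * treat + rng.normal(size=n)  # prognostic variable X[:, 1]

tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y)
for leaf in np.unique(tree.apply(X)):
    m = tree.apply(X) == leaf
    effect = LinearRegression().fit(treat[m].reshape(-1, 1), y[m]).coef_[0]
    print(f"leaf {leaf}: estimated treatment effect = {effect:.2f}")
```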
1 code implementation • 8 Jul 2020 • Christoph Molnar, Gunnar König, Julia Herbinger, Timo Freiesleben, Susanne Dandl, Christian A. Scholbeck, Giuseppe Casalicchio, Moritz Grosse-Wentrup, Bernd Bischl
An increasing number of model-agnostic interpretation techniques for machine learning (ML) models such as partial dependence plots (PDP), permutation feature importance (PFI) and Shapley values provide insightful model interpretations, but can lead to wrong conclusions if applied incorrectly.
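The first two of these techniques are available directly in scikit-learn; computing them is the easy part, and the paper's pitfalls concern how the results are (mis)interpreted:

```python
# Computing PDP and PFI with scikit-learn; Shapley values need an extra
# package (e.g. shap) and are omitted here.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence, permutation_importance

X, y = make_regression(n_samples=400, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Partial dependence of the prediction on feature 0
pdp = partial_dependence(model, X, features=[0])
print("PDP curve (first values):", pdp["average"][0][:3].round(1))

# Permutation feature importance (use held-out data in practice)
pfi = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("PFI means:", pfi.importances_mean.round(2))
```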
1 code implementation • 23 Apr 2020 • Susanne Dandl, Christoph Molnar, Martin Binder, Bernd Bischl
We show the usefulness of MOC, our multi-objective counterfactual explanation method, in concrete cases and compare our approach with state-of-the-art methods for counterfactual explanations.
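The objectives MOC trades off can be written out compactly; the sketch below paraphrases the paper's four objectives (validity, proximity, sparsity, plausibility) for a single candidate and is not the exact implementation:

```python
# The four MOC objectives for one candidate x_cf, paraphrased: validity,
# proximity, sparsity, and plausibility (distance to the k nearest training
# points). Lower is better for all four.
import numpy as np

def moc_objectives(model, X_train, x, x_cf, desired_class=1, k=5):
    o1 = 1.0 - model.predict_proba(x_cf.reshape(1, -1))[0, desired_class]
    o2 = np.abs(x - x_cf).mean()                  # proximity to x
    o3 = int((~np.isclose(x, x_cf)).sum())        # number of changed features
    d = np.abs(X_train - x_cf).sum(axis=1)        # L1 distances to data
    o4 = np.sort(d)[:k].mean()                    # plausibility
    return o1, o2, o3, o4
```

A multi-objective evolutionary algorithm then searches for the Pareto set over these four objectives rather than collapsing them into a single weighted loss.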