Search Results for author: Timo Freiesleben

Found 10 papers, 3 papers with code

CountARFactuals -- Generating plausible model-agnostic counterfactual explanations with adversarial random forests

no code implementations • 4 Apr 2024 • Susanne Dandl, Kristin Blesch, Timo Freiesleben, Gunnar König, Jan Kapar, Bernd Bischl, Marvin Wright

Counterfactual explanations elucidate algorithmic decisions by pointing to scenarios that would have led to an alternative, desired outcome.

counterfactual

Artificial Neural Nets and the Representation of Human Concepts

no code implementations • 8 Dec 2023 • Timo Freiesleben

Some go even further and believe that these concepts are stored in individual units of the network.

Dear XAI Community, We Need to Talk! Fundamental Misconceptions in Current XAI Research

no code implementations • 7 Jun 2023 • Timo Freiesleben, Gunnar König

Despite progress in the field, significant parts of current XAI research are still not on solid conceptual, ethical, or methodological grounds.

Explainable Artificial Intelligence (XAI), Misconceptions

Improvement-Focused Causal Recourse (ICR)

1 code implementation • 27 Oct 2022 • Gunnar König, Timo Freiesleben, Moritz Grosse-Wentrup

We demonstrate that given correct causal knowledge, ICR, in contrast to existing approaches, guides towards both acceptance and improvement.

Scientific Inference With Interpretable Machine Learning: Analyzing Models to Learn About Real-World Phenomena

no code implementations • 11 Jun 2022 • Timo Freiesleben, Gunnar König, Christoph Molnar, Alvaro Tejero-Cantero

These descriptors are IML methods that provide insight not just into the model, but also into the properties of the phenomenon the model is designed to represent.

BIG-bench Machine Learning, Interpretable Machine Learning

A Causal Perspective on Meaningful and Robust Algorithmic Recourse

no code implementations • 16 Jul 2021 • Gunnar König, Timo Freiesleben, Moritz Grosse-Wentrup

Thus, an action that changes the prediction in the desired way may not lead to an improvement of the underlying target.

Decomposition of Global Feature Importance into Direct and Associative Components (DEDACT)

1 code implementation • 15 Jun 2021 • Gunnar König, Timo Freiesleben, Bernd Bischl, Giuseppe Casalicchio, Moritz Grosse-Wentrup

Direct importance provides causal insight into the model's mechanism, yet it fails to expose the leakage of information from associated but not directly used variables.

Feature Importance

The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples

no code implementations • 11 Sep 2020 • Timo Freiesleben

The same method that creates adversarial examples (AEs) to fool image-classifiers can be used to generate counterfactual explanations (CEs) that explain algorithmic decisions.

counterfactual, Relation
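The shared optimization the abstract alludes to can be illustrated with a generic gradient sketch (not the paper's own method): starting from an input, minimize a prediction loss toward a target class plus a distance penalty to the original point. With a target equal to the desired outcome this yields a counterfactual explanation; with a target equal to a wrong class it yields an adversarial example. The linear classifier and all parameter values below are illustrative assumptions.

```python
import numpy as np

def counterfactual(x0, w, b, target=1.0, lam=0.02, lr=0.1, steps=1000):
    """Gradient search for a counterfactual of a linear classifier f(x) = w.x + b.

    Minimizes (sigmoid(f(x)) - target)^2 + lam * ||x - x0||^2.
    The same loop with target set to an incorrect class produces
    an adversarial example instead of a counterfactual explanation.
    """
    x = x0.astype(float).copy()
    for _ in range(steps):
        z = w @ x + b
        p = 1.0 / (1.0 + np.exp(-z))           # predicted probability
        # gradient of the squared loss plus the distance penalty
        grad = 2 * (p - target) * p * (1 - p) * w + 2 * lam * (x - x0)
        x -= lr * grad
    return x

# toy usage: only feature 0 matters; the search should flip the
# prediction while leaving the irrelevant feature 1 untouched
x0 = np.array([-2.0, 0.0])
w = np.array([1.0, 0.0])
x_cf = counterfactual(x0, w, b=0.0)
```

Here the distance penalty keeps the counterfactual close to the original input, which is what makes it read as "what minimal change would have led to the desired outcome".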

General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models

1 code implementation • 8 Jul 2020 • Christoph Molnar, Gunnar König, Julia Herbinger, Timo Freiesleben, Susanne Dandl, Christian A. Scholbeck, Giuseppe Casalicchio, Moritz Grosse-Wentrup, Bernd Bischl

An increasing number of model-agnostic interpretation techniques for machine learning (ML) models such as partial dependence plots (PDP), permutation feature importance (PFI) and Shapley values provide insightful model interpretations, but can lead to wrong conclusions if applied incorrectly.

BIG-bench Machine Learning, Feature Importance
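Of the methods the abstract names, permutation feature importance (PFI) is the simplest to sketch: shuffle one column and measure how much the model's error degrades. The snippet below is a minimal illustrative implementation, not the paper's code; the toy model and data are assumptions for demonstration.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=None):
    """Permutation feature importance: increase in error after shuffling one column."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-target link
            scores.append(metric(y, model(Xp)))
        importances[j] = np.mean(scores) - baseline
    return importances

# toy usage: the model uses only feature 0, so shuffling feature 1
# should leave the error unchanged (importance ~ 0)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X[:, 0]
model = lambda X: X[:, 0]
mse = lambda y, p: np.mean((y - p) ** 2)
imp = permutation_importance(model, X, y, mse, seed=1)
```

A pitfall of exactly the kind the paper warns about: with correlated features, permuting one column creates unrealistic data points, so PFI values must be interpreted with care.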
