Search Results for author: Giuseppe Casalicchio

Found 28 papers, 16 papers with code

Effector: A Python package for regional explanations

1 code implementation • 3 Apr 2024 • Vasilis Gkolemis, Christos Diou, Eirini Ntoutsi, Theodore Dalamagas, Bernd Bischl, Julia Herbinger, Giuseppe Casalicchio

Effector implements well-established global effect methods, assesses the heterogeneity of each method and, based on that, provides regional effects.
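
As a minimal sketch of the underlying idea (not the actual effector API): averaging individual conditional expectation (ICE) curves gives a global effect, their spread measures its heterogeneity, and averaging within more homogeneous subgroups gives regional effects. The data, model, and the hand-picked split below are illustrative assumptions.

# Sketch only: global effect, heterogeneity, and regional effects with a sklearn model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 3))
y = X[:, 0] * (X[:, 2] > 0) - X[:, 0] * (X[:, 2] <= 0) + rng.normal(0, 0.1, 500)
model = RandomForestRegressor(random_state=0).fit(X, y)

grid = np.linspace(-1, 1, 20)
ice = np.array([model.predict(np.column_stack([np.full(len(X), g), X[:, 1], X[:, 2]]))
                for g in grid])               # one row per grid value, one column per observation
global_effect = ice.mean(axis=1)              # averaged ICE curves = global effect (PDP)
heterogeneity = ice.std(axis=1).mean()        # spread of ICE curves around the global effect

# Regional effects: average again within subgroups where the curves are homogeneous;
# the split on the third feature is hand-picked here purely for illustration.
for name, mask in [("x3 <= 0", X[:, 2] <= 0), ("x3 > 0", X[:, 2] > 0)]:
    print(name, ice[:, mask].mean(axis=1).round(2))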

Explaining Bayesian Optimization by Shapley Values Facilitates Human-AI Collaboration

no code implementations • 7 Mar 2024 • Julian Rodemann, Federico Croppi, Philipp Arens, Yusuf Sale, Julia Herbinger, Bernd Bischl, Eyke Hüllermeier, Thomas Augustin, Conor J. Walsh, Giuseppe Casalicchio

We address this issue by proposing ShapleyBO, a framework for interpreting BO's proposals with game-theoretic Shapley values, which quantify each parameter's contribution to BO's acquisition function.

Bayesian Optimization • Gaussian Processes

Position Paper: Bridging the Gap Between Machine Learning and Sensitivity Analysis

no code implementations • 20 Dec 2023 • Christian A. Scholbeck, Julia Moosbauer, Giuseppe Casalicchio, Hoshin Gupta, Bernd Bischl, Christian Heumann

We argue that interpretations of machine learning (ML) models or the model-building process can be seen as a form of sensitivity analysis (SA), a general methodology used to explain complex systems in many fields such as environmental modeling, engineering, or economics.

Position

Leveraging Model-based Trees as Interpretable Surrogate Models for Model Distillation

1 code implementation • 4 Oct 2023 • Julia Herbinger, Susanne Dandl, Fiona K. Ewald, Sofia Loibl, Giuseppe Casalicchio

Surrogate models play a crucial role in retrospectively interpreting complex and powerful black box machine learning models via model distillation.

fmeffects: An R Package for Forward Marginal Effects

no code implementations • 3 Oct 2023 • Holger Löwe, Christian A. Scholbeck, Christian Heumann, Bernd Bischl, Giuseppe Casalicchio

Forward marginal effects (FMEs) have recently been introduced as a versatile and effective model-agnostic interpretation method.
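
fmeffects itself is an R package; the lines below are only a hedged Python sketch of what an FME is: the change in prediction when a feature of interest is moved forward by a step size h, computed per observation. The model, data, and the helper name forward_marginal_effect are made up for illustration.

# Sketch only: forward marginal effects as per-observation forward differences.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.1, 300)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

def forward_marginal_effect(model, X, feature, h):
    X_step = X.copy()
    X_step[:, feature] += h                              # move the feature forward by h
    return model.predict(X_step) - model.predict(X)      # one FME per observation

fme = forward_marginal_effect(model, X, feature=0, h=0.5)
print("average FME of feature 0 for step 0.5:", round(fme.mean(), 3))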

Decomposing Global Feature Effects Based on Feature Interactions

2 code implementations • 1 Jun 2023 • Julia Herbinger, Bernd Bischl, Giuseppe Casalicchio

We formally introduce generalized additive decomposition of global effects (GADGET), a new framework based on recursive partitioning that finds interpretable regions in the feature space such that the interaction-related heterogeneity of local feature effects is minimized.

Interpretable Regional Descriptors: Hyperbox-Based Local Explanations

no code implementations • 4 May 2023 • Susanne Dandl, Giuseppe Casalicchio, Bernd Bischl, Ludwig Bothmann

This work introduces interpretable regional descriptors, or IRDs, for local, model-agnostic interpretations.

counterfactuals: An R Package for Counterfactual Explanation Methods

no code implementations • 13 Apr 2023 • Susanne Dandl, Andreas Hofheinz, Martin Binder, Bernd Bischl, Giuseppe Casalicchio

Counterfactual explanation methods provide information on how feature values of individual observations must be changed to obtain a desired prediction.

counterfactual • Counterfactual Explanation
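
The counterfactuals package is written in R; as a rough sketch of the idea rather than the package's interface, the snippet below searches among random perturbations of one observation for the closest point that the model assigns to the desired class. The data, model, and the random-search strategy are illustrative assumptions.

# Sketch only: a naive random search for a nearby counterfactual.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

x = np.array([-1.0, -0.5])                                    # factual observation, class 0
candidates = x + rng.normal(0, 1.0, size=(5000, 2))           # random perturbations of x
valid = candidates[model.predict(candidates) == 1]            # keep those reaching class 1
counterfactual = valid[np.argmin(np.linalg.norm(valid - x, axis=1))]   # closest valid change
print("factual:", x, "-> counterfactual:", counterfactual.round(2))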

Algorithm-Agnostic Interpretations for Clustering

no code implementations • 21 Sep 2022 • Christian A. Scholbeck, Henri Funk, Giuseppe Casalicchio

The partial dependence for clustering evaluates average changes in cluster assignments for the entire feature space.

Clustering • Dimensionality Reduction +1
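
As an illustrative sketch of that idea (not the authors' exact estimator): set one feature to each value of a grid for all observations and record how often the cluster assignment changes on average. The data and clustering model below are assumptions made for the example.

# Sketch only: a partial-dependence-style curve for cluster assignments.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
original = km.predict(X)

for g in np.linspace(X[:, 0].min(), X[:, 0].max(), 5):
    X_mod = X.copy()
    X_mod[:, 0] = g                                      # intervene on the first feature
    changed = (km.predict(X_mod) != original).mean()     # share of reassigned observations
    print(f"x1 = {g:5.2f}: {changed:.0%} of assignments change")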

REPID: Regional Effect Plots with implicit Interaction Detection

1 code implementation • 15 Feb 2022 • Julia Herbinger, Bernd Bischl, Giuseppe Casalicchio

Machine learning models can automatically learn complex relationships, such as non-linear and interaction effects.

BIG-bench Machine Learning • Interpretable Machine Learning

Marginal Effects for Non-Linear Prediction Functions

no code implementations • 21 Jan 2022 • Christian A. Scholbeck, Giuseppe Casalicchio, Christoph Molnar, Bernd Bischl, Christian Heumann

Hence, marginal effects are typically used as approximations for feature effects, either in the form of derivatives of the prediction function or forward differences in prediction due to a change in a feature value.

Developing Open Source Educational Resources for Machine Learning and Data Science

no code implementations • 28 Jul 2021 • Ludwig Bothmann, Sven Strickroth, Giuseppe Casalicchio, David Rügamer, Marius Lindauer, Fabian Scheipl, Bernd Bischl

Such material should be openly accessible to everyone, with as few barriers as possible, even more so for key technologies such as Machine Learning (ML) and Data Science (DS).

BIG-bench Machine Learning

Decomposition of Global Feature Importance into Direct and Associative Components (DEDACT)

1 code implementation • 15 Jun 2021 • Gunnar König, Timo Freiesleben, Bernd Bischl, Giuseppe Casalicchio, Moritz Grosse-Wentrup

Direct importance provides causal insight into the model's mechanism, yet it fails to expose the leakage of information from associated but not directly used variables.

Feature Importance

Grouped Feature Importance and Combined Features Effect Plot

1 code implementation • 23 Apr 2021 • Quay Au, Julia Herbinger, Clemens Stachl, Bernd Bischl, Giuseppe Casalicchio

However, for researchers and practitioners, it is often equally important to quantify the importance or visualize the effect of feature groups.

BIG-bench Machine Learning • Feature Importance +1
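
One common way to make this concrete (a sketch of a grouped permutation importance, not necessarily the exact measures studied in the paper) is to permute all features of a group jointly and record the resulting increase in loss; everything below is an illustrative assumption.

# Sketch only: grouped permutation feature importance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 4))
y = X[:, 0] + X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.1, 500)
model = RandomForestRegressor(random_state=0).fit(X, y)
baseline = mean_squared_error(y, model.predict(X))

def grouped_importance(group, n_repeats=10):
    losses = []
    for _ in range(n_repeats):
        idx = rng.permutation(len(X))
        X_perm = X.copy()
        X_perm[:, group] = X[idx][:, group]              # permute the whole group jointly
        losses.append(mean_squared_error(y, model.predict(X_perm)))
    return np.mean(losses) - baseline                    # importance = average increase in loss

print("group {x1, x2}:", round(grouped_importance([0, 1]), 3))
print("group {x3, x4}:", round(grouped_importance([2, 3]), 3))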

Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges

no code implementations • 19 Oct 2020 • Christoph Molnar, Giuseppe Casalicchio, Bernd Bischl

To address the challenges and advance the field, we urge the community to recall its roots of interpretable, data-driven modeling in statistics and (rule-based) ML, but also to consider other areas such as sensitivity analysis, causal inference, and the social sciences.

BIG-bench Machine Learning • Causal Inference +1

General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models

1 code implementation • 8 Jul 2020 • Christoph Molnar, Gunnar König, Julia Herbinger, Timo Freiesleben, Susanne Dandl, Christian A. Scholbeck, Giuseppe Casalicchio, Moritz Grosse-Wentrup, Bernd Bischl

An increasing number of model-agnostic interpretation techniques for machine learning (ML) models such as partial dependence plots (PDP), permutation feature importance (PFI) and Shapley values provide insightful model interpretations, but can lead to wrong conclusions if applied incorrectly.

BIG-bench Machine Learning • Feature Importance

Model-agnostic Feature Importance and Effects with Dependent Features -- A Conditional Subgroup Approach

1 code implementation • 8 Jun 2020 • Christoph Molnar, Gunnar König, Bernd Bischl, Giuseppe Casalicchio

In addition, we apply the conditional subgroups approach to partial dependence plots (PDP), a popular method for describing feature effects that can also suffer from extrapolation when features are dependent and interactions are present in the model.

Feature Importance
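
A heavily simplified stand-in for that idea (the paper derives subgroups with decision trees; here a correlated feature is simply binned): compute the partial dependence of one feature separately within subgroups of a feature it depends on, so grid values stay close to observed data. All names and data below are illustrative.

# Sketch only: partial dependence computed within conditional subgroups.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
x2 = rng.uniform(0, 1, 600)
x1 = x2 + rng.normal(0, 0.05, 600)                       # x1 strongly depends on x2
X = np.column_stack([x1, x2])
y = x1 * x2 + rng.normal(0, 0.05, 600)
model = RandomForestRegressor(random_state=0).fit(X, y)

for lo, hi in [(0.0, 0.5), (0.5, 1.0)]:                  # subgroups defined on x2
    mask = (x2 >= lo) & (x2 < hi)
    grid = np.linspace(x1[mask].min(), x1[mask].max(), 5)   # grid restricted to the subgroup
    pdp = [model.predict(np.column_stack([np.full(mask.sum(), g), x2[mask]])).mean()
           for g in grid]
    print(f"x2 in [{lo}, {hi}):", np.round(pdp, 2))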

Quantifying Model Complexity via Functional Decomposition for Better Post-Hoc Interpretability

2 code implementations • 8 Apr 2019 • Christoph Molnar, Giuseppe Casalicchio, Bernd Bischl

Post-hoc model-agnostic interpretation methods such as partial dependence plots can be employed to interpret complex machine learning models.

BIG-bench Machine Learning • Interpretable Machine Learning

Visualizing the Feature Importance for Black Box Models

1 code implementation • 18 Apr 2018 • Giuseppe Casalicchio, Christoph Molnar, Bernd Bischl

Based on local feature importance, we propose two visual tools: partial importance (PI) and individual conditional importance (ICI) plots, which visualize how changes in a feature affect the model performance on average as well as for individual observations.

Feature Importance
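
As an illustrative sketch of the quantities behind these plots (not the authors' implementation): for every grid value of a feature, replace that feature for all observations and record each observation's loss, which gives one ICI curve per observation; averaging the ICI curves yields the PI curve. The data and model below are assumptions.

# Sketch only: ICI curves and their average, the PI curve.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 3))
y = 2 * X[:, 0] + X[:, 1] + rng.normal(0, 0.1, 300)
model = RandomForestRegressor(random_state=0).fit(X, y)

grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 10)
ici = np.empty((len(grid), len(X)))
for j, g in enumerate(grid):
    X_mod = X.copy()
    X_mod[:, 0] = g                                      # replace feature 0 for all observations
    ici[j] = (y - model.predict(X_mod)) ** 2             # per-observation squared-error loss (ICI)
pi = ici.mean(axis=1)                                    # PI curve = average over observations
print("PI curve:", np.round(pi, 2))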
