1 code implementation • 30 Aug 2023 • Tuwe Löfström, Helena Löfström, Ulf Johansson, Cecilia Sönströd, Rudy Matela
This paper extends the feature importance explanation method Calibrated Explanations (CE), previously supporting only classification, to standard regression and probabilistic regression, i.e., the probability that the target is above an arbitrary threshold.
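The probabilistic-regression idea can be illustrated with a minimal sketch (this is not the CE implementation; the function name and the use of calibration residuals as a simple conformal predictive distribution are assumptions for illustration):

```python
import numpy as np

def prob_above_threshold(cal_residuals, point_pred, threshold):
    """Estimate P(y > threshold) for a regression prediction.

    A simple conformal-style estimate: shift the point prediction by
    each calibration residual and count how often the result exceeds
    the threshold (smoothing and tie-breaking omitted for brevity).
    """
    samples = point_pred + np.asarray(cal_residuals, dtype=float)
    return float(np.mean(samples > threshold))

# Example: residuals observed on a calibration set
residuals = [-1.0, 0.0, 1.0, 2.0]
p = prob_above_threshold(residuals, point_pred=5.0, threshold=5.5)
# Two of the four shifted samples (6.0 and 7.0) exceed 5.5, so p = 0.5
```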
no code implementations • 23 Aug 2023 • Amr AlKhatib, Henrik Boström, Sofiane Ennadir, Ulf Johansson
The results also suggest that the proposed method can produce tight intervals, while providing validity guarantees.
no code implementations • 11 Jun 2023 • Ulf Johansson, Tuwe Löfström, Cecilia Sönströd
In the experiments, we apply Venn-Abers calibration to decision trees, random forests and XGBoost models, showing how both overconfident and underconfident models are corrected.
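The core Venn-Abers construction can be sketched as follows (a minimal illustration using scikit-learn's isotonic regression, not the authors' code; the inductive variant shown here fits the calibrator twice per test score, once under each assumed label, yielding a multiprobability interval):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def venn_abers_interval(cal_scores, cal_labels, test_score):
    """Inductive Venn-Abers calibration for one test score.

    Append the test score with an assumed label of 0, then 1, fit
    isotonic regression each time, and read off the calibrated
    probability at the test score. The pair (p0, p1) brackets the
    calibrated probability regardless of the true label.
    """
    probs = []
    for assumed_label in (0, 1):
        s = np.append(np.asarray(cal_scores, dtype=float), test_score)
        y = np.append(np.asarray(cal_labels, dtype=float), assumed_label)
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
        iso.fit(s, y)
        probs.append(float(iso.predict([test_score])[0]))
    return probs[0], probs[1]

# Example with a small calibration set of model scores and labels
p0, p1 = venn_abers_interval(
    cal_scores=[0.1, 0.2, 0.3, 0.6, 0.7, 0.9],
    cal_labels=[0, 0, 1, 0, 1, 1],
    test_score=0.5,
)
# p0 <= p1, and both lie in [0, 1]
```

An overconfident model (scores pushed toward 0 and 1) gets pulled back toward the empirical frequencies on the calibration set, while an underconfident model gets stretched outward, which is the correction behaviour the paper demonstrates.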
1 code implementation • 3 May 2023 • Helena Löfström, Tuwe Löfström, Ulf Johansson, Cecilia Sönströd
While local explanations for AI models can offer insights into individual predictions, such as feature importance, they are plagued by issues like instability.
no code implementations • 25 Mar 2022 • Helena Löfström, Karl Hammar, Ulf Johansson
In this paper, we have conducted a semi-systematic meta-survey of fifteen literature surveys covering the evaluation of explainability, to identify existing criteria usable for comparative evaluations of explanation methods.