no code implementations • 24 Jan 2024 • Sofie Goethals, Toon Calders, David Martens
Artificial Intelligence (AI) finds widespread applications across various domains, sparking concerns about fairness in its deployment.
1 code implementation • 29 Sep 2023 • David Martens, Camille Dams, James Hinns, Mark Vergouwen
Data scientists primarily see the value of SHAPstories in communicating explanations to a general audience: 92% indicate that SHAPstories will increase the ease and confidence with which non-specialists understand AI predictions.
no code implementations • 24 Jun 2023 • Sofie Goethals, David Martens, Theodoros Evgeniou
Artificial Intelligence (AI) systems are increasingly used in high-stakes domains of our lives, increasing the need to explain their decisions and to ensure that those decisions align with how we want them to be made.
no code implementations • 10 Jun 2023 • Bjorge Meulemeester, Raphael Mazzine Barbosa de Oliveira, David Martens
Using these importance values, we additionally introduce three chart types to visualize the counterfactual explanations: (a) the Greedy chart, which shows a greedy sequential path of feature changes that increase the prediction score up to the predicted class change; (b) the CounterShapley chart, which depicts each feature's CounterShapley score in a simple, one-dimensional chart; and (c) the Constellation chart, which shows all possible combinations of feature changes and their impact on the model's prediction score.
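As an illustration of the sequential path behind the Greedy chart, the sketch below assumes a scikit-learn-style classifier with `predict`/`predict_proba` and integer class labels; `greedy_path`, `x` (the factual instance), and `x_cf` (the counterfactual) are illustrative names, not the paper's API.

```python
import numpy as np

def greedy_path(model, x, x_cf):
    """Greedily apply the counterfactual's feature changes one at a time,
    always picking the change that most increases the score of the
    counterfactual class, until the predicted class flips."""
    x = x.copy()
    target = model.predict(x_cf.reshape(1, -1))[0]        # counterfactual class
    todo = [i for i in range(len(x)) if x[i] != x_cf[i]]  # features that differ
    path = []
    while todo and model.predict(x.reshape(1, -1))[0] != target:
        gains = []
        for i in todo:
            trial = x.copy()
            trial[i] = x_cf[i]
            gains.append(model.predict_proba(trial.reshape(1, -1))[0][target])
        best = todo.pop(int(np.argmax(gains)))
        x[best] = x_cf[best]
        path.append((best, model.predict_proba(x.reshape(1, -1))[0][target]))
    return path  # ordered (feature, score) pairs: the Greedy chart's axes
```

Plotting the scores in `path` against the sequence of applied changes yields the kind of stepwise curve the Greedy chart visualizes.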
no code implementations • 17 May 2023 • Raphael Mazzine Barbosa de Oliveira, Sofie Goethals, Dieter Brughmans, David Martens
In eXplainable Artificial Intelligence (XAI), counterfactual explanations are known to give simple, short, and comprehensible justifications for complex model decisions.
no code implementations • 25 Apr 2023 • Dieter Brughmans, Lissa Melis, David Martens
Even though multiple possible explanations are beneficial in some contexts, there are circumstances where diversity among counterfactual explanations creates a potential disagreement problem among stakeholders.
no code implementations • 21 Oct 2022 • Sofie Goethals, Kenneth Sörensen, David Martens
Black-box machine learning models are being used in more and more high-stakes domains, which creates a growing need for Explainable AI (XAI).
no code implementations • 12 Nov 2021 • Yanou Ramon, Sandra C. Matz, R. A. Farrokhnia, David Martens
In this paper, we show how Explainable AI (XAI) can help domain experts and data subjects validate, question, and improve models that classify psychological traits from digital footprints.
no code implementations • 9 Jul 2021 • Tom Vermeire, Thibault Laugel, Xavier Renard, David Martens, Marcin Detyniecki
Due to regulatory initiatives and a shift in public awareness, explainability is becoming an important requirement for organizations that make use of automated decision-making.
no code implementations • 9 Jul 2021 • Raphael Mazzine, David Martens
This benchmarking study and framework can help practitioners in determining which technique and building blocks most suit their context, and can help researchers in the design and evaluation of current and future counterfactual generation algorithms.
3 code implementations • 15 Apr 2021 • Dieter Brughmans, Pieter Leyman, David Martens
We propose four versions of NICE: one without optimization and three that each optimize the explanations for one property (sparsity, proximity, or plausibility).
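A minimal sketch of the core idea, assuming a scikit-learn-style classifier; the reward functions that distinguish the sparsity, proximity, and plausibility versions are collapsed into a single greedy criterion here, so this is an illustration of the approach rather than the paper's exact algorithm.

```python
import numpy as np

def nice_counterfactual(model, x, X_train):
    """Illustrative NICE-style search: start from the nearest training
    instance the model predicts differently (the nearest unlike
    neighbour) and greedily copy its feature values into x until the
    prediction flips, keeping the explanation sparse."""
    pred = model.predict(x.reshape(1, -1))[0]
    unlike = X_train[model.predict(X_train) != pred]   # assumes at least one exists
    nun = unlike[np.argmin(np.abs(unlike - x).sum(axis=1))]  # L1-nearest neighbour
    cf = x.copy()
    while model.predict(cf.reshape(1, -1))[0] == pred:
        # try each not-yet-copied feature; keep the one that lowers the
        # predicted probability of the original class the most
        candidates = [i for i in range(len(x)) if cf[i] != nun[i]]
        probs = []
        for i in candidates:
            trial = cf.copy()
            trial[i] = nun[i]
            probs.append(model.predict_proba(trial.reshape(1, -1))[0][pred])
        best = candidates[int(np.argmin(probs))]
        cf[best] = nun[best]
    return cf
```

The version without optimization would simply return `nun` itself; the loop terminates because once all differing values are copied, `cf` equals the unlike neighbour and is classified differently by construction.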
no code implementations • 16 Apr 2020 • Tom Vermeire, David Martens
In this paper, SEDC is introduced as a model-agnostic, instance-level explanation method that produces visual counterfactual explanations for image classification.
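The sketch below illustrates the search idea under stated assumptions: a Keras-style model whose `predict` returns class probabilities, and scikit-image's SLIC for segmentation; `sedc_image` and its parameters are illustrative names, not the paper's code.

```python
import numpy as np
from skimage.segmentation import slic

def sedc_image(model, image, target_class, replacement=0.0):
    """SEDC-style search sketch: partition the image into segments, then
    greedily blank out the segment that most reduces the target-class
    score until the predicted class changes. The removed segments form
    the visual counterfactual explanation."""
    segments = slic(image, n_segments=50)
    remaining = list(np.unique(segments))
    removed, current = [], image.copy()
    while remaining and model.predict(current[None]).argmax() == target_class:
        scores = []
        for s in remaining:
            trial = current.copy()
            trial[segments == s] = replacement  # grey/zero-fill the segment
            scores.append(model.predict(trial[None])[0][target_class])
        best = remaining[int(np.argmin(scores))]
        current[segments == best] = replacement
        removed.append(best)
        remaining.remove(best)
    return removed, current  # segments whose removal flips the prediction
```

A mean-colour fill for `replacement` is a common alternative to zero-filling; either way, the returned segment set is the "evidence" whose removal counterfactually changes the classification.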
no code implementations • 10 Mar 2020 • Yanou Ramon, David Martens, Theodoros Evgeniou, Stiene Praet
Machine learning models trained on behavioral and textual data can be highly accurate, but are often very difficult to interpret.
3 code implementations • 4 Dec 2019 • Yanou Ramon, David Martens, Foster Provost, Theodoros Evgeniou
This study aligns the recently proposed Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) with the notion of counterfactual explanations, and empirically benchmarks their effectiveness and efficiency against SEDC using a collection of 13 data sets.
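To make the alignment concrete, the hedged sketch below turns an additive-importance ranking into a counterfactual the way SEDC-style methods define one, assuming binary or count features where setting a value to zero corresponds to removing the evidence; `counterfactual_from_ranking` is an illustrative name.

```python
def counterfactual_from_ranking(model, x, ranking):
    """Given an importance ranking (e.g. feature indices sorted by
    decreasing LIME or SHAP attribution), zero out features in rank
    order until the predicted class changes; the switched-off features
    then form a counterfactual explanation."""
    pred = model.predict(x.reshape(1, -1))[0]
    cf = x.copy()
    explanation = []
    for i in ranking:                        # most important feature first
        cf[i] = 0.0                          # 'remove' this feature's evidence
        explanation.append(i)
        if model.predict(cf.reshape(1, -1))[0] != pred:
            return explanation               # smallest prefix that flips the class
    return None                              # ranking never flips the prediction
```

The length of the returned prefix gives one natural effectiveness measure for comparing LIME and SHAP rankings against counterfactuals that SEDC computes directly.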
no code implementations • 17 May 2019 • Dorien Herremans, David Martens, Kenneth Sörensen
Record companies invest billions of dollars in new talent around the globe each year.
no code implementations • 21 Jul 2016 • Julie Moeyersoms, Brian d'Alessandro, Foster Provost, David Martens
We evaluate these alternatives in terms of explanation "bang for the buck", i.e., how many examples' inferences are explained for a given number of features listed.
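One hedged reading of this metric as code, with illustrative names (`instance_explanations` holds each example's explanation as a list of feature indices):

```python
def bang_for_the_buck(instance_explanations, listed_features):
    """Fraction of examples whose explanation is fully covered by the
    listed features: a longer list explains more inferences, at the
    cost of a longer read."""
    listed = set(listed_features)
    covered = sum(1 for expl in instance_explanations if set(expl) <= listed)
    return covered / len(instance_explanations)
```

For example, with explanations `[[1, 2], [3], [1, 4]]` and listed features `[1, 2, 3]`, two of the three examples' inferences are explained (2/3).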