Search Results for author: David Martens

Found 16 papers, 3 papers with code

Beyond Accuracy-Fairness: Stop evaluating bias mitigation methods solely on between-group metrics

no code implementations · 24 Jan 2024 · Sofie Goethals, Toon Calders, David Martens

Artificial Intelligence (AI) finds widespread applications across various domains, sparking concerns about fairness in its deployment.

Fairness

Tell Me a Story! Narrative-Driven XAI with Large Language Models

1 code implementation · 29 Sep 2023 · David Martens, Camille Dams, James Hinns, Mark Vergouwen

Data scientists primarily see the value of SHAPstories in communicating explanations to a general audience, with 92% of data scientists indicating that it will contribute to the ease and confidence of nonspecialists in understanding AI predictions.

counterfactual · Feature Importance · +1

Manipulation Risks in Explainable AI: The Implications of the Disagreement Problem

no code implementations · 24 Jun 2023 · Sofie Goethals, David Martens, Theodoros Evgeniou

Artificial Intelligence (AI) systems are increasingly used in high-stakes domains of our lives, increasing the need to explain these decisions and to ensure that they are aligned with how we want decisions to be made.

Explainable Artificial Intelligence (XAI)

Calculating and Visualizing Counterfactual Feature Importance Values

no code implementations · 10 Jun 2023 · Bjorge Meulemeester, Raphael Mazzine Barbosa de Oliveira, David Martens

Using these importance values, we additionally introduce three chart types to visualize the counterfactual explanations: (a) the Greedy chart, which shows a greedy sequential path for prediction score increase up to predicted class change, (b) the CounterShapley chart, depicting its respective score in a simple and one-dimensional chart, and finally (c) the Constellation chart, which shows all possible combinations of feature changes, and their impact on the model's prediction score.

counterfactual · Counterfactual Explanation · +1
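The "Greedy chart" described above can be sketched as a greedy sequential search: at each step, apply the single feature change that most increases the prediction score, stopping once the predicted class flips. Everything in this sketch (the synthetic data, the logistic model, the choice of "target" values) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy setup: synthetic data and model (stand-ins for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

x = np.array([-1.0, -1.0, 0.0])   # instance predicted as class 0
target = np.ones_like(x)          # hypothetical target values per feature
path = []                         # (feature index, score after change) per step

changed = set()
current = x.copy()
while model.predict([current])[0] == 0 and len(changed) < len(x):
    # Try setting each not-yet-changed feature to its target value,
    # and keep the change that yields the highest class-1 score.
    best_j, best_score = None, -np.inf
    for j in range(len(x)):
        if j in changed:
            continue
        trial = current.copy()
        trial[j] = target[j]
        score = model.predict_proba([trial])[0, 1]
        if score > best_score:
            best_j, best_score = j, score
    current[best_j] = target[best_j]
    changed.add(best_j)
    path.append((best_j, round(float(best_score), 3)))

print(path)  # greedy sequence of feature changes up to the class flip
```

Plotting the scores in `path` against the step number gives the kind of monotone "path to the counterfactual" that the Greedy chart visualizes.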

Unveiling the Potential of Counterfactuals Explanations in Employability

no code implementations · 17 May 2023 · Raphael Mazzine Barbosa de Oliveira, Sofie Goethals, Dieter Brughmans, David Martens

In eXplainable Artificial Intelligence (XAI), counterfactual explanations are known to give simple, short, and comprehensible justifications for complex model decisions.

counterfactual · Explainable Artificial Intelligence · +1

Disagreement amongst counterfactual explanations: How transparency can be deceptive

no code implementations · 25 Apr 2023 · Dieter Brughmans, Lissa Melis, David Martens

Even though in some contexts multiple possible explanations are beneficial, there are circumstances where diversity amongst counterfactual explanations results in a potential disagreement problem among stakeholders.

counterfactual · Decision Making · +2

The privacy issue of counterfactual explanations: explanation linkage attacks

no code implementations · 21 Oct 2022 · Sofie Goethals, Kenneth Sörensen, David Martens

Black-box machine learning models are being used in more and more high-stakes domains, which creates a growing need for Explainable AI (XAI).

counterfactual · Explainable Artificial Intelligence (XAI)

Explainable AI for Psychological Profiling from Digital Footprints: A Case Study of Big Five Personality Predictions from Spending Data

no code implementations · 12 Nov 2021 · Yanou Ramon, Sandra C. Matz, R. A. Farrokhnia, David Martens

In this paper, we show how Explainable AI (XAI) can help domain experts and data subjects validate, question, and improve models that classify psychological traits from digital footprints.

counterfactual · Explainable Artificial Intelligence (XAI)

How to choose an Explainability Method? Towards a Methodical Implementation of XAI in Practice

no code implementations · 9 Jul 2021 · Tom Vermeire, Thibault Laugel, Xavier Renard, David Martens, Marcin Detyniecki

Explainability is becoming an important requirement for organizations that make use of automated decision-making due to regulatory initiatives and a shift in public awareness.

Decision Making · Explainable Artificial Intelligence (XAI)

A Framework and Benchmarking Study for Counterfactual Generating Methods on Tabular Data

no code implementations · 9 Jul 2021 · Raphael Mazzine, David Martens

This benchmarking study and framework can help practitioners in determining which technique and building blocks most suit their context, and can help researchers in the design and evaluation of current and future counterfactual generation algorithms.

Benchmarking · counterfactual

NICE: An Algorithm for Nearest Instance Counterfactual Explanations

3 code implementations · 15 Apr 2021 · Dieter Brughmans, Pieter Leyman, David Martens

We propose four versions of NICE: one without optimization, and three which optimize the explanations for one of the following properties: sparsity, proximity, or plausibility.

counterfactual
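The core idea behind nearest-instance counterfactuals can be sketched as follows: start from the training instance the model assigns to a different class that lies closest to the instance being explained (its "nearest unlike neighbour"). This is a simplification of NICE's base version without optimization; the dataset, model, and distance metric below are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy setup: synthetic data and model (illustrative assumptions only).
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)
clf = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]                       # instance to explain
pred = clf.predict([x])[0]

# Candidates: training points the model assigns to a different class.
mask = clf.predict(X) != pred
candidates = X[mask]

# Nearest unlike neighbour under Euclidean distance.
nun = candidates[np.argmin(np.linalg.norm(candidates - x, axis=1))]

print("original class:", pred, "counterfactual class:", clf.predict([nun])[0])
```

The published algorithm then refines this starting point, e.g. by copying over only a sparse subset of the neighbour's feature values, which is what the sparsity/proximity/plausibility variants optimize.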

Explainable Image Classification with Evidence Counterfactual

no code implementations · 16 Apr 2020 · Tom Vermeire, David Martens

In this paper, SEDC is introduced as a model-agnostic instance-level explanation method for image classification to obtain visual counterfactual explanations.

Classification · counterfactual · +4

Metafeatures-based Rule-Extraction for Classifiers on Behavioral and Textual Data

no code implementations · 10 Mar 2020 · Yanou Ramon, David Martens, Theodoros Evgeniou, Stiene Praet

Machine learning models on behavioral and textual data can result in highly accurate prediction models, but are often very difficult to interpret.

Counterfactual Explanation Algorithms for Behavioral and Textual Data

3 code implementations · 4 Dec 2019 · Yanou Ramon, David Martens, Foster Provost, Theodoros Evgeniou

This study aligns the recently proposed Linear Interpretable Model-agnostic Explainer (LIME) and Shapley Additive Explanations (SHAP) with the notion of counterfactual explanations, and empirically benchmarks their effectiveness and efficiency against SEDC using a collection of 13 data sets.

counterfactual · Counterfactual Explanation
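The removal-based notion of a counterfactual explanation on text can be sketched as: delete the words ranked most important for the predicted class until the prediction flips. Everything here is an illustrative assumption (a toy corpus, and linear-model weights standing in for an importance ranking); it is not the benchmarked SEDC, LIME, or SHAP implementations:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy sentiment corpus and model (stand-ins for illustration only).
docs = ["great fun movie", "boring bad movie", "great acting", "bad plot",
        "fun great plot", "boring plot bad"]
labels = [1, 0, 1, 0, 1, 0]
vec = CountVectorizer()
X = vec.fit_transform(docs)
clf = LogisticRegression().fit(X, labels)

doc = "great fun but boring"
words = doc.split()
vocab = vec.vocabulary_

removed = []
while clf.predict(vec.transform([" ".join(words)]))[0] == 1 and words:
    # Importance ranking: the linear weight of each remaining in-vocabulary
    # word toward the positive class (a proxy for a LIME/SHAP ranking).
    scores = {w: clf.coef_[0][vocab[w]] for w in words if w in vocab}
    if not scores:
        break
    top = max(scores, key=scores.get)
    words.remove(top)
    removed.append(top)

print("removed:", removed)  # words whose removal flips the prediction
```

The study's question is essentially which ranking (LIME, SHAP, or SEDC's own search) yields the shortest such removal set, and at what computational cost.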

Dance Hit Song Prediction

no code implementations · 17 May 2019 · Dorien Herremans, David Martens, Kenneth Sörensen

Record companies invest billions of dollars in new talent around the globe each year.

General Classification · Position

Explaining Classification Models Built on High-Dimensional Sparse Data

no code implementations · 21 Jul 2016 · Julie Moeyersoms, Brian d'Alessandro, Foster Provost, David Martens

We evaluate these alternatives in terms of explanation "bang for the buck," i.e., how many examples' inferences are explained for a given number of features listed.

Classification · General Classification · +1
