Search Results for author: Xuanxiang Huang

Found 11 papers, 2 papers with code

On Correcting SHAP Scores

no code implementations · 30 Apr 2024 · Olivier Letoffe, Xuanxiang Huang, Joao Marques-Silva

Recent work uncovered examples of classifiers for which SHAP scores yield misleading feature attributions.

A Refutation of Shapley Values for Explainability

no code implementations · 6 Sep 2023 · Xuanxiang Huang, Joao Marques-Silva

Earlier work devised a brute-force approach to identify Boolean functions, defined on small numbers of features, together with associated instances, that exhibit such inadequacy-revealing issues; these serve as evidence of the inadequacy of Shapley values for rule-based explainability.

Explainability is NOT a Game

no code implementations · 27 Jun 2023 · Joao Marques-Silva, Xuanxiang Huang

Explainable artificial intelligence (XAI) aims to help human decision-makers in understanding complex machine learning (ML) models.

Tasks: Explainable Artificial Intelligence (XAI) +1

From Robustness to Explainability and Back Again

no code implementations · 5 Jun 2023 · Xuanxiang Huang, Joao Marques-Silva

In contrast with ad-hoc methods for eXplainable Artificial Intelligence (XAI), formal explainability offers important guarantees of rigor.

Tasks: Explainable Artificial Intelligence (XAI)

The Inadequacy of Shapley Values for Explainability

no code implementations · 16 Feb 2023 · Xuanxiang Huang, Joao Marques-Silva

This paper develops a rigorous argument for why the use of Shapley values in explainable AI (XAI) will necessarily yield provably misleading information about the relative importance of features for predictions.

Tasks: Explainable Artificial Intelligence (XAI)
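The critique above concerns Shapley values as used in XAI, where each feature's score is a weighted average of its marginal contributions over all feature subsets. A minimal sketch of that exact computation for a toy Boolean classifier (the function `f` and the uniform feature distribution are illustrative assumptions, not taken from the paper):

```python
from itertools import combinations
from math import factorial

def f(x1, x2, x3):
    # hypothetical toy Boolean classifier (not from the paper)
    return int(x1 and (x2 or x3))

def value(S, point, n=3):
    # characteristic function: expected prediction when the features in S
    # are fixed to the instance's values and the rest are uniform on {0, 1}
    free = [i for i in range(n) if i not in S]
    total = 0
    for bits in range(2 ** len(free)):
        x = list(point)
        for j, i in enumerate(free):
            x[i] = (bits >> j) & 1
        total += f(*x)
    return total / (2 ** len(free))

def shapley(point, n=3):
    # exact Shapley values: weighted marginal contributions over all subsets
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += w * (value(set(S) | {i}, point) - value(set(S), point))
        phis.append(phi)
    return phis
```

By construction the scores satisfy efficiency (they sum to the prediction minus the mean prediction) and symmetry; the papers listed here argue that, even so, the resulting feature rankings can be provably misleading.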

Feature Necessity & Relevancy in ML Classifier Explanations

1 code implementation · 27 Oct 2022 · Xuanxiang Huang, Martin C. Cooper, Antonio Morgado, Jordi Planes, Joao Marques-Silva

Given a machine learning (ML) model and a prediction, explanations can be defined as sets of features which are sufficient for the prediction.
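A set of features is sufficient when fixing those features to the instance's values forces the prediction regardless of the remaining features. A minimal sketch of computing one subset-minimal sufficient set by deletion-based minimization (the toy `classifier` is an illustrative assumption; the paper's algorithms are more general):

```python
from itertools import product

def classifier(x):
    # hypothetical toy model: predicts 1 iff at least two of three binary features are set
    return int(sum(x) >= 2)

def is_sufficient(S, point, n=3):
    # S is sufficient if fixing the features in S to the instance's values
    # forces the prediction, whatever values the remaining features take
    pred = classifier(point)
    free = [i for i in range(n) if i not in S]
    for vals in product([0, 1], repeat=len(free)):
        x = list(point)
        for i, v in zip(free, vals):
            x[i] = v
        if classifier(x) != pred:
            return False
    return True

def minimal_sufficient_set(point, n=3):
    # deletion-based minimization: start from all features and
    # drop each one whose removal keeps the set sufficient
    S = set(range(n))
    for i in range(n):
        if is_sufficient(S - {i}, point, n):
            S.remove(i)
    return S
```

Necessity and relevancy, the queries studied in this paper, then ask whether a given feature occurs in every such explanation or in at least one.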

On Deciding Feature Membership in Explanations of SDD & Related Classifiers

no code implementations · 15 Feb 2022 · Xuanxiang Huang, Joao Marques-Silva

In contrast, this paper shows that, for a number of families of classifiers, the feature membership problem (FMP) is in NP.

Efficient Explanations for Knowledge Compilation Languages

no code implementations · 4 Jul 2021 · Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Martin C. Cooper, Nicholas Asher, Joao Marques-Silva

Knowledge compilation (KC) languages find a growing number of practical uses, including in Constraint Programming (CP) and in Machine Learning (ML).

Tasks: Negation

On Efficiently Explaining Graph-Based Classifiers

1 code implementation · 2 Jun 2021 · Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Joao Marques-Silva

Recent work has shown that decision trees (DTs) may not be interpretable, and has also proposed a polynomial-time algorithm for computing one PI-explanation of a DT.
