Search Results for author: Yacine Izza

Found 13 papers, 4 papers with code

Locally-Minimal Probabilistic Explanations

1 code implementation • 19 Dec 2023 • Yacine Izza, Kuldeep S. Meel, Joao Marques-Silva

Formal abductive explanations offer crucial guarantees of rigor and so are of interest in high-stakes uses of machine learning (ML).

The Pros and Cons of Adversarial Robustness

no code implementations • 18 Dec 2023 • Yacine Izza, Joao Marques-Silva

The importance of ML model robustness is illustrated, for example, by competitions that track the progress of robustness tools, most notably for neural networks (NNs), and also by efforts towards robustness certification.

Adversarial Robustness

Axiomatic Aggregations of Abductive Explanations

no code implementations • 29 Sep 2023 • Gagan Biradar, Yacine Izza, Elita Lobo, Vignesh Viswanathan, Yair Zick

We also evaluate them on multiple datasets and show that these explanations are robust to the attacks that fool SHAP and LIME.

Feature Importance

Delivering Inflated Explanations

1 code implementation • 27 Jun 2023 • Yacine Izza, Alexey Ignatiev, Peter Stuckey, Joao Marques-Silva

Given the feature values of an instance to be explained and the resulting decision, a formal abductive explanation is a set of features such that, as long as they take the given values, the decision will always be the same.

Explainable Artificial Intelligence (XAI)
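As a rough illustration of the definition in the abstract above (a minimal sketch, not the authors' algorithm), the check below enumerates, by brute force, all completions of the non-fixed features of a toy classifier; the domains, feature names and model are all hypothetical.

```python
from itertools import product

# Toy domains and classifier; both are hypothetical stand-ins, not from the paper.
DOMAINS = {"age_band": [0, 1, 2], "income_band": [0, 1, 2], "has_debt": [0, 1]}

def classify(point):
    # Hypothetical model: approve iff income is high enough and there is no debt.
    return int(point["income_band"] >= 1 and point["has_debt"] == 0)

def is_abductive_explanation(subset, instance):
    """Check the defining property: with the features in `subset` fixed to their
    values in `instance`, every completion of the remaining features yields the
    same decision as the instance itself."""
    target = classify(instance)
    free = [f for f in DOMAINS if f not in subset]
    for values in product(*(DOMAINS[f] for f in free)):
        point = dict(instance)
        point.update(zip(free, values))
        if classify(point) != target:
            return False
    return True

instance = {"age_band": 1, "income_band": 2, "has_debt": 0}
print(is_abductive_explanation({"income_band", "has_debt"}, instance))  # True
print(is_abductive_explanation({"age_band"}, instance))                 # False
```

Brute-force enumeration is exponential in the number of free features; formal approaches such as those in the papers listed here instead reason about the model with logic-based tools.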

On Computing Relevant Features for Explaining NBCs

no code implementations • 11 Jul 2022 • Yacine Izza, Joao Marques-Silva

Despite the progress observed with model-agnostic explainable AI (XAI), model-agnostic XAI can still produce incorrect explanations.

Explainable Artificial Intelligence (XAI)

On Tackling Explanation Redundancy in Decision Trees

no code implementations • 20 May 2022 • Yacine Izza, Alexey Ignatiev, Joao Marques-Silva

The belief in DT interpretability rests on the expectation that explanations for DT predictions are generally succinct.

Provably Precise, Succinct and Efficient Explanations for Decision Trees

1 code implementation • 19 May 2022 • Yacine Izza, Alexey Ignatiev, Nina Narodytska, Martin C. Cooper, Joao Marques-Silva

The paper proposes two logic encodings for computing smallest $\delta$-relevant sets for DTs.

Efficient Explanations for Knowledge Compilation Languages

no code implementations • 4 Jul 2021 • Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Martin C. Cooper, Nicholas Asher, Joao Marques-Silva

Knowledge compilation (KC) languages find a growing number of practical uses, including in Constraint Programming (CP) and in Machine Learning (ML).

Negation

On Efficiently Explaining Graph-Based Classifiers

1 code implementation • 2 Jun 2021 • Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Joao Marques-Silva

Recent work has not only shown that decision trees (DTs) may not be interpretable, but has also proposed a polynomial-time algorithm for computing one PI-explanation of a DT.
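For readers unfamiliar with the term, a PI-explanation (prime implicant explanation) is a subset-minimal set of features whose values suffice to force the prediction. The sketch below illustrates the notion on a hypothetical toy tree via greedy deletion over a brute-force sufficiency check; it is exponential in the number of free features and is not the polynomial-time algorithm referenced in the abstract.

```python
from itertools import product

# Hypothetical binary features and a tiny hand-written decision tree.
FEATURES = ["f1", "f2", "f3"]
DOMAINS = {f: [0, 1] for f in FEATURES}

def tree_predict(x):
    # Illustrative DT: the prediction depends only on f1 and f2.
    if x["f1"] == 1:
        return 1 if x["f2"] == 1 else 0
    return 0

def sufficient(subset, instance):
    # Fixing the features in `subset` must force the instance's prediction.
    target = tree_predict(instance)
    free = [f for f in FEATURES if f not in subset]
    return all(
        tree_predict({**instance, **dict(zip(free, vals))}) == target
        for vals in product(*(DOMAINS[f] for f in free))
    )

def one_pi_explanation(instance):
    """Greedy deletion: drop each feature whose removal keeps the set
    sufficient. Because sufficiency is monotone, a single pass yields a
    subset-minimal sufficient set, i.e. one PI-explanation."""
    subset = set(FEATURES)
    for f in FEATURES:
        if sufficient(subset - {f}, instance):
            subset.remove(f)
    return subset

print(one_pi_explanation({"f1": 1, "f2": 1, "f3": 0}))  # {'f1', 'f2'}
```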

Efficient Explanations With Relevant Sets

no code implementations • 1 Jun 2021 • Yacine Izza, Alexey Ignatiev, Nina Narodytska, Martin C. Cooper, Joao Marques-Silva

Recent work proposed $\delta$-relevant inputs (or sets) as a probabilistic explanation for the predictions made by a classifier on a given input.
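Spelling this notion out (with assumed notation, not quoted from the paper): for a classifier $\kappa$, an instance $x$ predicted as class $c$, and a threshold $\delta \in (0, 1]$, a set of features $S$ is $\delta$-relevant if fixing the features in $S$ to their values in $x$ yields the prediction $c$ with probability at least $\delta$ over the remaining features:

```latex
% delta-relevant set (paraphrased; notation assumed, not quoted from the paper)
\Pr_{y}\!\left( \kappa(y) = c \,\middle|\, y_S = x_S \right) \;\ge\; \delta
```

With $\delta = 1$ the condition collapses to a deterministic abductive explanation: fixing $x_S$ always yields $c$.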

On Explaining Random Forests with SAT

no code implementations • 21 May 2021 • Yacine Izza, Joao Marques-Silva

Random Forests (RFs) are among the most widely used Machine Learning (ML) classifiers.

On Explaining Decision Trees

no code implementations • 21 Oct 2020 • Yacine Izza, Alexey Ignatiev, Joao Marques-Silva

Decision trees (DTs) epitomize what has come to be known as interpretable machine learning (ML) models.

Interpretable Machine Learning
