1 code implementation • 19 Dec 2023 • Yacine Izza, Kuldeep S. Meel, Joao Marques-Silva
Formal abductive explanations offer crucial guarantees of rigor and so are of interest in high-stakes uses of machine learning (ML).
no code implementations • 18 Dec 2023 • Yacine Izza, Joao Marques-Silva
The importance of ML model robustness is illustrated, for example, by competitions evaluating the progress of robustness tools, namely in the case of neural networks (NNs), and also by efforts towards robustness certification.
no code implementations • 29 Sep 2023 • Gagan Biradar, Yacine Izza, Elita Lobo, Vignesh Viswanathan, Yair Zick
We also evaluate them on multiple datasets and show that these explanations are robust to the attacks that fool SHAP and LIME.
1 code implementation • 27 Jun 2023 • Yacine Izza, Alexey Ignatiev, Peter Stuckey, Joao Marques-Silva
Given a set of feature values for an instance to be explained, and a resulting decision, a formal abductive explanation is a set of features such that, if they take the given values, the decision will always be the same.
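The definition above can be illustrated with a minimal sketch: a candidate set of features is an abductive explanation iff fixing those features to the instance's values forces the decision, no matter how the remaining features vary. The classifier, function names, and feature domains below are hypothetical, and the exhaustive check only scales to small discrete feature spaces (the cited papers use SAT/SMT-style reasoning instead).

```python
from itertools import product

# Hypothetical toy classifier over three binary features.
def predict(x):
    # Decision is positive iff x0 and (x1 or x2).
    return int(x[0] and (x[1] or x[2]))

def is_abductive_explanation(subset, instance, domains, predict):
    """Check whether fixing the features in `subset` to their values in
    `instance` forces the classifier's decision, with the remaining
    features ranging over their full domains (brute-force check)."""
    target = predict(instance)
    free = [i for i in range(len(instance)) if i not in subset]
    for values in product(*(domains[i] for i in free)):
        x = list(instance)
        for i, v in zip(free, values):
            x[i] = v
        if predict(x) != target:
            return False
    return True

instance = (1, 1, 0)
domains = [(0, 1)] * 3
print(is_abductive_explanation({0, 1}, instance, domains, predict))  # True
print(is_abductive_explanation({1}, instance, domains, predict))     # False
```

Here {x0, x1} is an abductive explanation for the positive decision on (1, 1, 0), whereas {x1} alone is not, since setting x0 = 0 flips the prediction.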
no code implementations • 12 Dec 2022 • Yacine Izza, Xuanxiang Huang, Alexey Ignatiev, Nina Narodytska, Martin C. Cooper, Joao Marques-Silva
One solution is to consider intrinsic interpretability, which does not exhibit the drawback of unsoundness.
no code implementations • 11 Jul 2022 • Yacine Izza, Joao Marques-Silva
Despite the progress observed with model-agnostic explainable AI (XAI), model-agnostic XAI can nevertheless produce incorrect explanations.
no code implementations • 20 May 2022 • Yacine Izza, Alexey Ignatiev, Joao Marques-Silva
The belief in DT interpretability is justified by the fact that explanations for DT predictions are generally expected to be succinct.
1 code implementation • 19 May 2022 • Yacine Izza, Alexey Ignatiev, Nina Narodytska, Martin C. Cooper, Joao Marques-Silva
The paper proposes two logic encodings for computing smallest $\delta$-relevant sets for DTs.
no code implementations • 4 Jul 2021 • Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Martin C. Cooper, Nicholas Asher, Joao Marques-Silva
Knowledge compilation (KC) languages find a growing number of practical uses, including in Constraint Programming (CP) and in Machine Learning (ML).
1 code implementation • 2 Jun 2021 • Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Joao Marques-Silva
Recent work has not only shown that decision trees (DTs) may not be interpretable, but also proposed a polynomial-time algorithm for computing one PI-explanation of a DT.
no code implementations • 1 Jun 2021 • Yacine Izza, Alexey Ignatiev, Nina Narodytska, Martin C. Cooper, Joao Marques-Silva
Recent work proposed $\delta$-relevant inputs (or sets) as a probabilistic explanation for the predictions made by a classifier on a given input.
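The notion of a $\delta$-relevant set can be sketched concretely: a set of features is $\delta$-relevant if, after fixing those features to the instance's values, the classifier keeps its decision with probability at least $\delta$ over the remaining features. The toy classifier and function names below are hypothetical; the exact enumeration assumes uniformly distributed binary features and is only feasible for tiny feature spaces.

```python
from itertools import product

# Hypothetical toy classifier over three binary features.
def predict(x):
    return int(x[0] and (x[1] or x[2]))

def precision(subset, instance, domains, predict):
    """Exact probability (uniform input distribution) that the classifier
    keeps the instance's decision when the features in `subset` are fixed
    to their values in `instance` and the remaining features vary.
    `subset` is delta-relevant iff this value is at least delta."""
    target = predict(instance)
    free = [i for i in range(len(instance)) if i not in subset]
    total = hits = 0
    for values in product(*(domains[i] for i in free)):
        x = list(instance)
        for i, v in zip(free, values):
            x[i] = v
        total += 1
        hits += predict(x) == target
    return hits / total

instance = (1, 1, 0)
domains = [(0, 1)] * 3
print(precision({0}, instance, domains, predict))  # 0.75
```

With only x0 fixed, three of the four completions preserve the positive decision, so {x0} is $\delta$-relevant for any $\delta \le 0.75$; an abductive explanation corresponds to the limit case $\delta = 1$.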
no code implementations • 21 May 2021 • Yacine Izza, Joao Marques-Silva
Random Forests (RFs) are among the most widely used Machine Learning (ML) classifiers.
no code implementations • 21 Oct 2020 • Yacine Izza, Alexey Ignatiev, Joao Marques-Silva
Decision trees (DTs) epitomize what has come to be known as interpretable machine learning (ML) models.