no code implementations • 12 Dec 2023 • Jinqiang Yu, Graham Farr, Alexey Ignatiev, Peter J. Stuckey
A recent alternative is so-called formal feature attribution (FFA), which defines feature importance as the fraction of formal abductive explanations (AXp's) containing the given feature.
Explainable Artificial Intelligence (XAI) +1
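The FFA definition above (a feature's importance is the fraction of AXp's that contain it) can be illustrated with a small sketch; the explanation sets here are hypothetical, not taken from the paper:

```python
from collections import Counter

def formal_feature_attribution(axps, features):
    """Fraction of abductive explanations (AXp's) containing each feature.

    axps: list of AXp's for one instance, each a set of feature indices.
    features: iterable of all feature indices of the model.
    """
    counts = Counter(f for axp in axps for f in axp)
    return {f: counts[f] / len(axps) for f in features}

# Hypothetical collection of AXp's over features 0..3 for one instance.
axps = [{0, 1}, {0, 2}, {0, 3}]
ffa = formal_feature_attribution(axps, range(4))
# feature 0 appears in all three AXp's, each other feature in one
```

In practice the hard part is enumerating (or counting) the AXp's themselves; this sketch only shows how the attribution score is derived once they are available.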
1 code implementation • 7 Jul 2023 • Jinqiang Yu, Alexey Ignatiev, Peter J. Stuckey
For instance, besides its scalability limitation, the formal approach is unable to tackle the feature attribution problem.
Explainable Artificial Intelligence (XAI) +3
1 code implementation • 27 Jun 2023 • Yacine Izza, Alexey Ignatiev, Peter Stuckey, Joao Marques-Silva
Given a set of feature values for an instance to be explained, and a resulting decision, a formal abductive explanation is a set of features such that, if they take the given values, the decision will always be the same.
Explainable Artificial Intelligence (XAI)
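The sufficiency condition in this definition can be checked by brute force on a small discrete model (an AXp is additionally required to be subset-minimal). The toy classifier and domains below are illustrative, not from the paper:

```python
from itertools import product

def is_sufficient(predict, instance, subset, domains):
    """Check whether fixing the features in `subset` to their values in
    `instance` forces the classifier's decision, by enumerating every
    completion of the remaining (free) features."""
    target = predict(instance)
    free = [i for i in range(len(instance)) if i not in subset]
    for combo in product(*(domains[i] for i in free)):
        point = list(instance)
        for i, v in zip(free, combo):
            point[i] = v
        if predict(point) != target:
            return False
    return True

# Toy classifier: predicts 1 iff x0 and x1 are both 1 (x2 is irrelevant).
predict = lambda x: int(x[0] == 1 and x[1] == 1)
domains = [(0, 1)] * 3
inst = [1, 1, 0]
# {0, 1} is sufficient for the decision on inst; {0} alone is not.
```

Enumeration is exponential in the number of free features; the formal approaches discussed in these papers replace it with reasoning over a logical encoding of the classifier.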
no code implementations • 12 Dec 2022 • Yacine Izza, Xuanxiang Huang, Alexey Ignatiev, Nina Narodytska, Martin C. Cooper, Joao Marques-Silva
One solution is to consider intrinsic interpretability, which does not exhibit the drawback of unsoundness.
1 code implementation • 20 Jun 2022 • Jinqiang Yu, Alexey Ignatiev, Peter J. Stuckey, Nina Narodytska, Joao Marques-Silva
It also means the "why not" explanations may be suspect as the counterexamples they rely on may not be meaningful.
Explainable Artificial Intelligence (XAI)
no code implementations • 20 May 2022 • Yacine Izza, Alexey Ignatiev, Joao Marques-Silva
The belief in DT interpretability is justified by the fact that explanations for DT predictions are generally expected to be succinct.
1 code implementation • 19 May 2022 • Yacine Izza, Alexey Ignatiev, Nina Narodytska, Martin C. Cooper, Joao Marques-Silva
The paper proposes two logic encodings for computing smallest $\delta$-relevant sets for DTs.
no code implementations • 4 Jul 2021 • Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Martin C. Cooper, Nicholas Asher, Joao Marques-Silva
Knowledge compilation (KC) languages find a growing number of practical uses, including in Constraint Programming (CP) and in Machine Learning (ML).
1 code implementation • 2 Jun 2021 • Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Joao Marques-Silva
Recent work has not only shown that decision trees (DTs) may not be interpretable, but also proposed a polynomial-time algorithm for computing one PI-explanation of a DT.
no code implementations • 1 Jun 2021 • Yacine Izza, Alexey Ignatiev, Nina Narodytska, Martin C. Cooper, Joao Marques-Silva
Recent work proposed $\delta$-relevant inputs (or sets) as a probabilistic explanation for the predictions made by a classifier on a given input.
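A $\delta$-relevant set, as described above, is a set of fixed features under which the classifier keeps its decision with probability at least $\delta$ over the free features. For a small discrete model with uniformly distributed features this can be checked exactly; the majority-vote classifier below is a made-up example:

```python
from itertools import product

def precision(predict, instance, subset, domains):
    """Fraction of completions of the free features (uniform over the
    finite domains) that preserve the classifier's decision."""
    target = predict(instance)
    free = [i for i in range(len(instance)) if i not in subset]
    total = same = 0
    for combo in product(*(domains[i] for i in free)):
        point = list(instance)
        for i, v in zip(free, combo):
            point[i] = v
        total += 1
        same += predict(point) == target
    return same / total

def is_delta_relevant(predict, instance, subset, domains, delta):
    return precision(predict, instance, subset, domains) >= delta

predict = lambda x: int(x[0] + x[1] + x[2] >= 2)  # majority of three bits
domains = [(0, 1)] * 3
inst = [1, 1, 1]
# Fixing {0, 1} preserves the decision for all completions (precision 1);
# fixing {0} alone preserves it for 3 of the 4 completions (precision 0.75).
```

With $\delta = 1$ this collapses to the sufficiency condition of an abductive explanation; smaller $\delta$ admits smaller, probabilistically precise sets.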
no code implementations • 1 Jun 2021 • Joao Marques-Silva, Thomas Gerspacher, Martin Cooper, Alexey Ignatiev, Nina Narodytska
This paper describes novel algorithms for the computation of one formal explanation of a (black-box) monotonic classifier.
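Monotonicity is what makes black-box explanation tractable here: whether a set of fixed features forces the decision can be tested with a single worst-case query, by pushing every free feature to the extreme that works against the current prediction. The following deletion-based sketch is one plausible reading of that idea, not the paper's exact algorithm, and the toy classifier is hypothetical:

```python
def explain_monotonic(predict, instance, lows, highs):
    """Deletion-based explanation for a black-box classifier assumed to be
    monotonically non-decreasing in every feature. Each feature is
    tentatively dropped; one worst-case query decides whether the
    remaining features still force the prediction."""
    n = len(instance)
    target = predict(instance)
    # Extreme values opposing the current prediction: minima if the
    # instance is predicted positive, maxima otherwise.
    worst = lows if target == 1 else highs
    expl = set(range(n))
    for i in range(n):
        trial = expl - {i}
        point = [instance[j] if j in trial else worst[j] for j in range(n)]
        if predict(point) == target:  # decision still forced without i
            expl = trial
    return expl

# Toy monotonic classifier: 1 iff x0 + 2*x1 >= 2, features in [0, 1].
predict = lambda x: int(x[0] + 2 * x[1] >= 2)
expl = explain_monotonic(predict, [1.0, 1.0], [0.0, 0.0], [1.0, 1.0])
# x1 = 1 alone forces the prediction, so {1} is a minimal explanation.
```

The loop issues one query per feature, so the whole procedure needs only linearly many classifier calls — no access to the model's internals.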
no code implementations • 14 May 2021 • Alexey Ignatiev, Joao Marques-Silva
Unfortunately, and in clear contrast with the case of DTs, this paper shows that computing explanations for DLs is computationally hard.
1 code implementation • 3 Feb 2021 • Alexey Ignatiev, Edward Lam, Peter J. Stuckey, Joao Marques-Silva
Machine learning (ML) is ubiquitous in modern life.
no code implementations • 1 Jan 2021 • Aditya Aniruddha Shrotri, Nina Narodytska, Alexey Ignatiev, Joao Marques-Silva, Kuldeep S. Meel, Moshe Vardi
Modern machine learning techniques have enjoyed widespread success, but are plagued by a lack of transparency in their decision making, which has led to the emergence of the field of explainable AI.
no code implementations • 1 Jan 2021 • Alexey Ignatiev, Nina Narodytska, Nicholas Asher, Joao Marques-Silva
Explanations of Machine Learning (ML) models often address a ‘Why?’ question.
no code implementations • 21 Dec 2020 • Alexey Ignatiev, Nina Narodytska, Nicholas Asher, Joao Marques-Silva
and 'Why Not?'
no code implementations • 21 Oct 2020 • Yacine Izza, Alexey Ignatiev, Joao Marques-Silva
Decision trees (DTs) epitomize what has come to be known as interpretable machine learning (ML) models.
no code implementations • 19 Oct 2020 • Jinqiang Yu, Alexey Ignatiev, Pierre Le Bodic, Peter J. Stuckey
Decision lists are one of the most easily explainable machine learning models.
no code implementations • NeurIPS 2020 • Joao Marques-Silva, Thomas Gerspacher, Martin C. Cooper, Alexey Ignatiev, Nina Narodytska
In contrast, we show that the computation of one PI-explanation for an NBC can be achieved in log-linear time, and that the same result also applies to the more general class of linear classifiers.
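For a linear classifier, the log-linear bound is plausible because each feature's worst-case contribution can be computed independently and the features sorted once. The sketch below follows that reading — it is an illustration of the complexity claim under assumed box domains, not the paper's exact procedure:

```python
def pi_explanation_linear(w, b, x, lows, highs):
    """Sketch of a log-linear PI-explanation procedure for a linear
    classifier sign(w.x + b) on box domains, assuming the instance is
    predicted positive. A feature's slack is the worst-case drop in
    score if it is freed; features are released in order of increasing
    slack while the guaranteed score stays positive."""
    n = len(w)
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    assert score > 0, "sketch assumes a positive prediction"
    slack = []
    for i in range(n):
        worst = w[i] * (lows[i] if w[i] > 0 else highs[i])
        slack.append((w[i] * x[i] - worst, i))
    slack.sort()  # O(n log n) -- the dominant cost
    expl = set(range(n))
    for s, i in slack:
        if score - s > 0:  # freeing feature i cannot flip the decision
            score -= s
            expl.discard(i)
    return expl

# Toy example: f(x) = 3*x0 + x1 - 2 with features in [0, 1], instance (1, 1).
expl = pi_explanation_linear([3.0, 1.0], -2.0, [1.0, 1.0], [0.0, 0.0], [1.0, 1.0])
# Freeing x1 leaves a worst-case score of 3 - 2 = 1 > 0, so {0} suffices.
```

An NBC reduces to this setting because its log-odds decision function is a linear form over the feature likelihoods, which is how the result extends from linear classifiers to NBCs.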
no code implementations • 29 Jul 2020 • Jinqiang Yu, Alexey Ignatiev, Peter J. Stuckey, Pierre Le Bodic
Earlier work on generating optimal decision sets first minimizes the number of rules, and then minimizes the number of literals, but the resulting rules can often be very large.
1 code implementation • NeurIPS 2019 • Alexey Ignatiev, Nina Narodytska, Joao Marques-Silva
The importance of explanations (XP's) of machine learning (ML) model predictions and of adversarial examples (AE's) cannot be overstated, with both arguably being essential for the practical success of ML in different settings.
1 code implementation • 4 Jul 2019 • Alexey Ignatiev, Nina Narodytska, Joao Marques-Silva
Recent years have witnessed a fast-growing interest in computing explanations for Machine Learning (ML) models predictions.
1 code implementation • 26 Nov 2018 • Alexey Ignatiev, Nina Narodytska, Joao Marques-Silva
The experimental results, obtained on well-known datasets, validate the scalability of the proposed approach as well as the quality of the computed solutions.
no code implementations • 13 Mar 2018 • Alexander Semenov, Oleg Zaikin, Ilya Otpuschennikov, Stepan Kochemazov, Alexey Ignatiev
Propositional satisfiability (SAT) is at the nucleus of state-of-the-art approaches to a variety of computationally hard problems, one of which is cryptanalysis.
no code implementations • 27 Apr 2016 • Alexey Ignatiev, Antonio Morgado, Joao Marques-Silva
Propositional abduction is a restriction of abduction to the propositional domain, and complexity-wise it lies at the second level of the polynomial hierarchy.