Search Results for author: Joao Marques-Silva

Found 36 papers, 10 papers with code

Locally-Minimal Probabilistic Explanations

1 code implementation • 19 Dec 2023 • Yacine Izza, Kuldeep S. Meel, Joao Marques-Silva

Formal abductive explanations offer crucial guarantees of rigor and so are of interest in high-stakes uses of machine learning (ML).

The Pros and Cons of Adversarial Robustness

no code implementations • 18 Dec 2023 • Yacine Izza, Joao Marques-Silva

The importance of ML model robustness is illustrated, for example, by the existence of competitions evaluating the progress of robustness tools, namely in the case of neural networks (NNs), but also by efforts towards robustness certification.

Adversarial Robustness

A Refutation of Shapley Values for Explainability

no code implementations • 6 Sep 2023 • Xuanxiang Huang, Joao Marques-Silva

This earlier work devised a brute-force approach to identify Boolean functions, defined on small numbers of features, together with associated instances, which displayed such inadequacy-revealing issues and so served as evidence of the inadequacy of Shapley values for rule-based explainability.

Delivering Inflated Explanations

1 code implementation • 27 Jun 2023 • Yacine Izza, Alexey Ignatiev, Peter Stuckey, Joao Marques-Silva

Given a set of feature values for an instance to be explained, and a resulting decision, a formal abductive explanation is a set of features such that, if these features take the given values, the decision will always be the same.
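For reference, a sketch of the standard definition from the formal explainability literature: given a classifier $\kappa$ over features $\mathcal{F}$ and an instance $\mathbf{v}$ with $\kappa(\mathbf{v}) = c$, a set $\mathcal{X} \subseteq \mathcal{F}$ is an abductive explanation if $\forall(\mathbf{x} \in \mathbb{F}).\ \big[\bigwedge_{i \in \mathcal{X}} (x_i = v_i)\big] \rightarrow (\kappa(\mathbf{x}) = c)$, usually with the added requirement of subset-minimality. Inflated explanations generalize this by letting each feature in $\mathcal{X}$ range over a set of values rather than a single value.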

Explainable Artificial Intelligence (XAI)

On Logic-Based Explainability with Partially Specified Inputs

no code implementations • 27 Jun 2023 • Ramón Béjar, António Morgado, Jordi Planes, Joao Marques-Silva

The paper shows that most of the algorithms proposed in recent years for computing logic-based explanations can be generalized to compute explanations given partially specified inputs.

Explainability is NOT a Game

no code implementations • 27 Jun 2023 • Joao Marques-Silva, Xuanxiang Huang

Explainable artificial intelligence (XAI) aims to help human decision-makers in understanding complex machine learning (ML) models.

Explainable Artificial Intelligence (XAI) +1

From Robustness to Explainability and Back Again

no code implementations • 5 Jun 2023 • Xuanxiang Huang, Joao Marques-Silva

In contrast with ad-hoc methods for eXplainable Artificial Intelligence (XAI), formal explainability offers important guarantees of rigor.

Explainable Artificial Intelligence (XAI)

The Inadequacy of Shapley Values for Explainability

no code implementations • 16 Feb 2023 • Xuanxiang Huang, Joao Marques-Silva

This paper develops a rigorous argument for why the use of Shapley values in explainable AI (XAI) will necessarily yield provably misleading information about the relative importance of features for predictions.
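For context, the Shapley value of feature $i$ with respect to a characteristic function $\upsilon$ over feature set $\mathcal{F}$ is the standard game-theoretic quantity $\phi_i(\upsilon) = \sum_{\mathcal{S} \subseteq \mathcal{F} \setminus \{i\}} \frac{|\mathcal{S}|!\,(|\mathcal{F}|-|\mathcal{S}|-1)!}{|\mathcal{F}|!}\,\big(\upsilon(\mathcal{S} \cup \{i\}) - \upsilon(\mathcal{S})\big)$; the paper's critique targets the instantiations of $\upsilon$ commonly used in XAI, such as the expected value of the model's output conditioned on the features in $\mathcal{S}$.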

Explainable Artificial Intelligence (XAI)

Feature Necessity & Relevancy in ML Classifier Explanations

1 code implementation • 27 Oct 2022 • Xuanxiang Huang, Martin C. Cooper, Antonio Morgado, Jordi Planes, Joao Marques-Silva

Given a machine learning (ML) model and a prediction, explanations can be defined as sets of features which are sufficient for the prediction.
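In this line of work the two queries admit a compact statement: letting $\mathbb{A}$ denote the set of all (subset-minimal abductive) explanations of the prediction, feature $i$ is necessary iff $i \in \bigcap_{\mathcal{X} \in \mathbb{A}} \mathcal{X}$, and relevant iff $i \in \bigcup_{\mathcal{X} \in \mathbb{A}} \mathcal{X}$.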

Logic-Based Explainability in Machine Learning

no code implementations • 24 Oct 2022 • Joao Marques-Silva

Most of these efforts have focused on so-called model-agnostic approaches.

On Computing Relevant Features for Explaining NBCs

no code implementations • 11 Jul 2022 • Yacine Izza, Joao Marques-Silva

Despite the progress observed with model-agnostic explainable AI (XAI), such approaches can nevertheless produce incorrect explanations.

Explainable Artificial Intelligence (XAI)

On Tackling Explanation Redundancy in Decision Trees

no code implementations • 20 May 2022 • Yacine Izza, Alexey Ignatiev, Joao Marques-Silva

The belief in DT interpretability is justified by the fact that explanations for DT predictions are generally expected to be succinct.

Provably Precise, Succinct and Efficient Explanations for Decision Trees

1 code implementation • 19 May 2022 • Yacine Izza, Alexey Ignatiev, Nina Narodytska, Martin C. Cooper, Joao Marques-Silva

The paper proposes two logic encodings for computing smallest $\delta$-relevant sets for DTs.

On Deciding Feature Membership in Explanations of SDD & Related Classifiers

no code implementations • 15 Feb 2022 • Xuanxiang Huang, Joao Marques-Silva

In contrast, this paper shows that, for a number of families of classifiers, the feature membership problem (FMP), i.e. deciding whether a given feature occurs in some explanation, is in NP.

Efficient Explanations for Knowledge Compilation Languages

no code implementations • 4 Jul 2021 • Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Martin C. Cooper, Nicholas Asher, Joao Marques-Silva

Knowledge compilation (KC) languages find a growing number of practical uses, including in Constraint Programming (CP) and in Machine Learning (ML).

Negation

On Efficiently Explaining Graph-Based Classifiers

1 code implementation • 2 Jun 2021 • Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Joao Marques-Silva

Recent work has not only shown that decision trees (DTs) may fail to be interpretable, but has also proposed a polynomial-time algorithm for computing one PI-explanation of a DT.
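As an illustration of the deletion-based idea behind such algorithms (a minimal sketch, not the paper's algorithm; the tree encoding and all names are assumptions), one can start from the features tested on the path taken by the instance and drop any feature whose removal still forces the prediction:

    # Tree nodes are dicts: leaves {"class": c}, internal nodes
    # {"feature": f, "threshold": t, "left": ..., "right": ...} (assumed encoding).
    def forced_class(node, fixed):
        """Return the class forced when only features in `fixed` are
        constrained, or None if distinct classes remain reachable."""
        if "class" in node:
            return node["class"]
        f = node["feature"]
        if f in fixed:
            branch = "left" if fixed[f] <= node["threshold"] else "right"
            return forced_class(node[branch], fixed)
        left = forced_class(node["left"], fixed)    # feature free: try both sides
        right = forced_class(node["right"], fixed)
        return left if left == right else None

    def prune_path_explanation(tree, instance, path_features):
        """Greedily drop redundant path features; the result is subset-minimal."""
        kept = set(path_features)
        target = forced_class(tree, dict(instance))
        for f in sorted(kept):
            trial = {g: instance[g] for g in kept if g != f}
            if forced_class(tree, trial) == target:
                kept.discard(f)                      # f was redundant
        return kept

Each redundancy check walks the tree once, so the whole sketch runs in time polynomial in the tree size.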

Explanations for Monotonic Classifiers

no code implementations • 1 Jun 2021 • Joao Marques-Silva, Thomas Gerspacher, Martin Cooper, Alexey Ignatiev, Nina Narodytska

This paper describes novel algorithms for the computation of one formal explanation of a (black-box) monotonic classifier.
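Here monotonicity has its usual meaning: under given orderings of the feature domains and classes, $\mathbf{x} \le \mathbf{x}'$ pointwise implies $\kappa(\mathbf{x}) \le \kappa(\mathbf{x}')$.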

Efficient Explanations With Relevant Sets

no code implementations • 1 Jun 2021 • Yacine Izza, Alexey Ignatiev, Nina Narodytska, Martin C. Cooper, Joao Marques-Silva

Recent work proposed $\delta$-relevant inputs (or sets) as a probabilistic explanation for the predictions made by a classifier on a given input.
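As a sketch of that definition: for a classifier $\kappa$, an instance $\mathbf{v}$ with prediction $c$, and (typically) a uniform distribution over the unconstrained features, a set $\mathcal{S}$ is $\delta$-relevant if $\Pr_{\mathbf{x}}\big(\kappa(\mathbf{x}) = c \mid \mathbf{x}_{\mathcal{S}} = \mathbf{v}_{\mathcal{S}}\big) \ge \delta$, i.e. fixing the features in $\mathcal{S}$ preserves the prediction with probability at least $\delta$, with $\delta = 1$ recovering the guarantee of an abductive explanation.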

On Explaining Random Forests with SAT

no code implementations • 21 May 2021 • Yacine Izza, Joao Marques-Silva

Random Forests (RFs) are among the most widely used Machine Learning (ML) classifiers.

SAT-Based Rigorous Explanations for Decision Lists

no code implementations • 14 May 2021 • Alexey Ignatiev, Joao Marques-Silva

Unfortunately, and in clear contrast with the case of DTs, this paper shows that computing explanations for DLs is computationally hard.

MILP for the Multi-objective VM Reassignment Problem

no code implementations • 18 Mar 2021 • Takfarinas Saber, Anthony Ventresque, Joao Marques-Silva, James Thorburn, Liam Murphy

Machine Reassignment is a challenging problem for constraint programming (CP) and mixed-integer linear programming (MILP) approaches, especially given the size of data centres.

Constraint-Driven Explanations of Black-Box ML Models

no code implementations • 1 Jan 2021 • Aditya Aniruddha Shrotri, Nina Narodytska, Alexey Ignatiev, Joao Marques-Silva, Kuldeep S. Meel, Moshe Vardi

Modern machine learning techniques have enjoyed widespread success, but are plagued by lack of transparency in their decision making, which has led to the emergence of the field of explainable AI.

Decision Making

On Relating "Why?" and "Why Not?" Explanations

no code implementations • 1 Jan 2021 • Alexey Ignatiev, Nina Narodytska, Nicholas Asher, Joao Marques-Silva

Explanations of Machine Learning (ML) models often address a ‘Why?’ question.

On Explaining Decision Trees

no code implementations • 21 Oct 2020 • Yacine Izza, Alexey Ignatiev, Joao Marques-Silva

Decision trees (DTs) epitomize what has come to be known as interpretable machine learning (ML) models.

Interpretable Machine Learning

Explaining Naive Bayes and Other Linear Classifiers with Polynomial Time and Delay

no code implementations • NeurIPS 2020 • Joao Marques-Silva, Thomas Gerspacher, Martin C. Cooper, Alexey Ignatiev, Nina Narodytska

In contrast, we show that the computation of one PI-explanation for an NBC can be achieved in log-linear time, and that the same result also applies to the more general class of linear classifiers.
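A minimal sketch of the sort-then-prefix idea for a linear classifier $\mathbf{w} \cdot \mathbf{x} + b > 0$ (illustrative only, not the paper's NBC algorithm; the interval feature domains and all names are assumptions):

    def explain_linear(w, b, lo, hi, v):
        """Indices of a cardinality-minimal feature set that, fixed at their
        values in v, guarantee the positive prediction for every completion."""
        n = len(w)
        # Worst-case contribution of feature i when left free.
        worst = [min(w[i] * lo[i], w[i] * hi[i]) for i in range(n)]
        # Slack gained by fixing feature i at v[i] instead of its worst case.
        gain = [w[i] * v[i] - worst[i] for i in range(n)]
        score = b + sum(worst)            # value with every feature free
        picked = []
        # Fix features in order of decreasing gain until no completion can
        # flip the decision; the sort dominates the O(n log n) running time.
        for i in sorted(range(n), key=lambda j: gain[j], reverse=True):
            if score > 0:
                break
            score += gain[i]
            picked.append(i)
        return picked

Naive Bayes classifiers reduce to this setting by working with the logarithms of the class-conditional probabilities.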

On Relating Explanations and Adversarial Examples

1 code implementation • NeurIPS 2019 • Alexey Ignatiev, Nina Narodytska, Joao Marques-Silva

The importance of explanations (XPs) of machine learning (ML) model predictions and of adversarial examples (AEs) cannot be overstated, with both arguably being essential for the practical success of ML in different settings.
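The connection made precise in this work is, in brief, a minimal hitting set duality: minimal adversarial examples can be computed as minimal hitting sets of the explanations, and vice versa.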

On Validating, Repairing and Refining Heuristic ML Explanations

1 code implementation • 4 Jul 2019 • Alexey Ignatiev, Nina Narodytska, Joao Marques-Silva

Recent years have witnessed fast-growing interest in computing explanations for Machine Learning (ML) model predictions.

Abduction-Based Explanations for Machine Learning Models

1 code implementation • 26 Nov 2018 • Alexey Ignatiev, Nina Narodytska, Joao Marques-Silva

The experimental results, obtained on well-known datasets, validate the scalability of the proposed approach as well as the quality of the computed solutions.

BIG-bench Machine Learning

Propositional Abduction with Implicit Hitting Sets

no code implementations • 27 Apr 2016 • Alexey Ignatiev, Antonio Morgado, Joao Marques-Silva

Propositional abduction is a restriction of abduction to the propositional domain and, complexity-wise, lies at the second level of the polynomial hierarchy.
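Concretely: given a background theory $T$, a set of hypotheses $H$, and a manifestation $m$, the problem is to find $E \subseteq H$ such that $T \wedge E$ is consistent and $T \wedge E \vDash m$ (usually with $E$ minimal); deciding whether such an $E$ exists is $\Sigma_2^p$-complete in general, which accounts for the second-level complexity noted above.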

SAT-based Preprocessing for MaxSAT (extended version)

no code implementations • 8 Oct 2013 • Anton Belov, Antonio Morgado, Joao Marques-Silva

The key requirement in this setting is that the preprocessing has to be sound, i.e. such that the solution can be reconstructed correctly and efficiently after the execution of a MaxSAT algorithm on the preprocessed instance.
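In other words, a sketch of the requirement: if preprocessing maps a MaxSAT instance $\varphi$ to $\varphi'$, then from any optimal solution of $\varphi'$ one must be able to recover, in polynomial time, an optimal solution of $\varphi$.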
