Search Results for author: Riccardo Guidotti

Found 18 papers, 9 papers with code

Bias Discovery within Human Raters: A Case Study of the Jigsaw Dataset

1 code implementation · NLPerspectives (LREC) 2022 · Marta Marchiori Manerba, Riccardo Guidotti, Lucia Passaro, Salvatore Ruggieri

Understanding and quantifying the bias introduced by human annotation of data is a crucial problem for trustworthy supervised learning.

A Bag of Receptive Fields for Time Series Extrinsic Predictions

no code implementations · 29 Nov 2023 · Francesco Spinnato, Riccardo Guidotti, Anna Monreale, Mirco Nanni

High-dimensional time series data poses challenges due to its dynamic nature, varying lengths, and presence of missing values.

Tasks: regression, Time Series (+1)

Social Bias Probing: Fairness Benchmarking for Language Models

no code implementations · 15 Nov 2023 · Marta Marchiori Manerba, Karolina Stańczak, Riccardo Guidotti, Isabelle Augenstein

While the impact of these biases has been recognized, prior methods for bias evaluation have been limited to binary association tests on small datasets, offering a constrained view of the nature of societal biases within language models.

Tasks: Benchmarking, Fairness (+1)

Ensemble of Counterfactual Explainers

1 code implementation · 29 Aug 2023 · Riccardo Guidotti, Salvatore Ruggieri

In eXplainable Artificial Intelligence (XAI), several counterfactual explainers have been proposed, each focusing on some desirable properties of counterfactual instances: minimality, actionability, stability, diversity, plausibility, and discriminative power.

Tasks: counterfactual, Explainable artificial intelligence (+1)
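As a rough illustration of the "minimality" property listed above, here is a minimal sketch (not the paper's ensemble method) of a naive counterfactual search: it greedily perturbs one feature at a time of a hypothetical black box's input until the prediction flips, keeping the cheapest flip found. The `black_box` rule and the feature names are invented for the example.

```python
# Naive single-feature counterfactual search (illustrative sketch only).

def black_box(x):
    """Hypothetical black box: approve (1) iff income - 2*debt > 10."""
    return int(x["income"] - 2 * x["debt"] > 10)

def counterfactual(x, step=1.0, max_steps=100):
    """Greedily perturb one feature at a time until the prediction flips;
    return the flipped instance with the smallest perturbation cost."""
    original = black_box(x)
    best = None
    for feat in x:
        for direction in (+step, -step):
            cand = dict(x)
            for n in range(1, max_steps + 1):
                cand[feat] = x[feat] + direction * n
                if black_box(cand) != original:
                    cost = abs(cand[feat] - x[feat])
                    if best is None or cost < best[1]:
                        best = (dict(cand), cost)
                    break
    return best

instance = {"income": 20.0, "debt": 6.0}   # black_box(instance) == 0
cf, cost = counterfactual(instance)        # cheapest flip: lower debt to 4.0
```

A real explainer would also weigh actionability (which features may change) and plausibility (whether the counterfactual lies on the data manifold), which this greedy search ignores.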

A Protocol for Continual Explanation of SHAP

1 code implementation · 12 Jun 2023 · Andrea Cossu, Francesco Spinnato, Riccardo Guidotti, Davide Bacciu

Continual Learning trains models on a stream of data, with the aim of learning new information without forgetting previous knowledge.

Tasks: Continual Learning
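The core idea of monitoring explanations over a stream can be sketched with stand-ins: a minimal example (not the paper's protocol, and using simple linear-model attributions rather than SHAP) that tracks how per-feature attributions on a fixed probe point drift as a model is updated with SGD across two tasks.

```python
import random

rng = random.Random(1)

def sgd_step(w, x, y, lr=0.05):
    """One SGD step on squared error for a linear model without bias."""
    pred = sum(wi * xi for wi, xi in zip(w, x))
    err = pred - y
    return [wi - lr * err * xi for wi, xi in zip(w, x)]

def attribution(w, x):
    """Linear-model attribution: per-feature contribution w_i * x_i."""
    return [wi * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0]
probe = [1.0, 1.0]          # fixed probe instance to explain over time
history = []
# Task 1 depends only on feature 0; Task 2 only on feature 1.
for task_w in ([2.0, 0.0], [0.0, 2.0]):
    for _ in range(500):
        x = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
        y = sum(ti * xi for ti, xi in zip(task_w, x))
        w = sgd_step(w, x, y)
    history.append(attribution(w, probe))
# After task 1 the attribution concentrates on feature 0; after task 2 it
# shifts to feature 1 -- the drift a continual-explanation protocol tracks.
```

The shift between `history[0]` and `history[1]` is the kind of explanation change that forgetting induces and that a continual evaluation protocol would record.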

Boosting Synthetic Data Generation with Effective Nonlinear Causal Discovery

1 code implementation · 18 Jan 2023 · Martina Cinquini, Fosca Giannotti, Riccardo Guidotti

However, the variables of a dataset typically depend on one another, and these dependencies are not considered in data generation, leading to the creation of implausible records.

Tasks: Causal Discovery, Synthetic Data Generation
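The implausible-record problem can be made concrete with a toy sketch (not the paper's method): sampling `bmi` independently of its (assumed) causal parents `weight` and `height` produces inconsistent records, while generating it from those parents keeps every record plausible. All variable names and ranges here are invented for illustration.

```python
import random

rng = random.Random(0)

def sample_independent():
    """Each variable drawn independently, ignoring dependencies."""
    return {"height": rng.uniform(1.5, 2.0),
            "weight": rng.uniform(50, 100),
            "bmi": rng.uniform(15, 35)}

def sample_causal():
    """bmi generated from its parents, so records stay consistent."""
    h = rng.uniform(1.5, 2.0)
    w = rng.uniform(50, 100)
    return {"height": h, "weight": w, "bmi": w / h ** 2}

def implausible(rec, tol=2.0):
    """A record is implausible if bmi disagrees with weight/height^2."""
    return abs(rec["bmi"] - rec["weight"] / rec["height"] ** 2) > tol

bad_indep = sum(implausible(sample_independent()) for _ in range(1000))
bad_causal = sum(implausible(sample_causal()) for _ in range(1000))
# Independent sampling yields many implausible records; causal sampling, none.
```

A causal-discovery step, as in the paper, would infer such parent-child dependencies from data rather than assuming them.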

Exemplars and Counterexemplars Explanations for Image Classifiers, Targeting Skin Lesion Labeling

no code implementations · 18 Jan 2023 · Carlo Metta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, Salvatore Rinzivillo

We propose a use case study for skin lesion diagnosis, illustrating how the practitioner can be provided with explanations of the decisions of a state-of-the-art deep neural network classifier trained to characterize skin lesions from examples.

Causality-Aware Local Interpretable Model-Agnostic Explanations

1 code implementation · 10 Dec 2022 · Martina Cinquini, Riccardo Guidotti

A main drawback of eXplainable Artificial Intelligence (XAI) approaches is the feature independence assumption, hindering the study of potential variable dependencies.

Tasks: Explainable Artificial Intelligence (XAI)

Explainable Deep Image Classifiers for Skin Lesion Diagnosis

no code implementations · 22 Nov 2021 · Carlo Metta, Andrea Beretta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, Salvatore Rinzivillo, Fosca Giannotti

A key issue in critical contexts such as medical diagnosis is the interpretability of the deep learning models adopted in decision-making systems.

Tasks: Decision Making, Explainable artificial intelligence (+2)

Benchmarking and Survey of Explanation Methods for Black Box Models

1 code implementation · 25 Feb 2021 · Francesco Bodria, Fosca Giannotti, Riccardo Guidotti, Francesca Naretto, Dino Pedreschi, Salvatore Rinzivillo

The widespread adoption of black-box models in Artificial Intelligence has increased the need for explanation methods that reveal how these opaque models reach specific decisions.

Tasks: Benchmarking
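One metric that benchmarks of explanation methods commonly report is fidelity: how often an interpretable surrogate agrees with the black box it explains. A minimal sketch, with an invented black box and surrogate (this is an assumed metric for illustration, not the survey's benchmark suite):

```python
import random

def black_box(x):
    """Hypothetical black box: class 1 iff x0 + x1 > 1."""
    return int(x[0] + x[1] > 1.0)

def surrogate(x):
    """A simpler, interpretable approximation of the black box."""
    return int(x[0] > 0.5)

rng = random.Random(2)
sample = [(rng.random(), rng.random()) for _ in range(1000)]
fidelity = sum(black_box(x) == surrogate(x) for x in sample) / len(sample)
# For these two rules on the unit square, fidelity is about 0.75.
```

High fidelity means the surrogate's logic is a faithful stand-in for the black box on the sampled region; benchmarks typically pair it with measures of the surrogate's own comprehensibility.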

GLocalX -- From Local to Global Explanations of Black Box AI Models

1 code implementation · 19 Jan 2021 · Mattia Setzu, Riccardo Guidotti, Anna Monreale, Franco Turini, Dino Pedreschi, Fosca Giannotti

Our findings show how it is often possible to achieve a high level of both accuracy and comprehensibility of classification models, even in complex domains with high-dimensional data, without necessarily trading one property for the other.

Tasks: Decision Making

On The Stability of Interpretable Models

no code implementations · 22 Oct 2018 · Riccardo Guidotti, Salvatore Ruggieri

Interpretable classification models are built with the purpose of providing a comprehensible description of the decision logic to an external oversight agent.

Tasks: Classification, feature selection (+2)

Open the Black Box: Data-Driven Explanation of Black Box Decision Systems

no code implementations · 26 Jun 2018 · Dino Pedreschi, Fosca Giannotti, Riccardo Guidotti, Anna Monreale, Luca Pappalardo, Salvatore Ruggieri, Franco Turini

We introduce the local-to-global framework for black box explanation, a novel approach with promising early results, which paves the way for a wide spectrum of future developments along three dimensions: (i) the language for expressing explanations in terms of highly expressive logic-based rules, with a statistical and causal interpretation; (ii) the inference of local explanations aimed at revealing the logic of the decision adopted for a specific instance by querying and auditing the black box in the vicinity of the target instance; and (iii) the bottom-up generalization of the many local explanations into simple global ones, with algorithms that optimize the quality and comprehensibility of explanations.

Tasks: Decision Making
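Dimension (iii) above, generalizing many local explanations into a few global ones, can be sketched in miniature (an assumed aggregation, not the paper's algorithm): collect the local rule extracted for each instance, then keep each distinct rule once, ranked by how many instances it covers. The rules below are invented for illustration.

```python
from collections import Counter

# Hypothetical local rules harvested from many explained instances,
# each as a (condition, outcome) pair.
local_rules = [
    ("age < 30", "deny"), ("age < 30", "deny"), ("income > 50", "grant"),
    ("age < 30", "deny"), ("income > 50", "grant"), ("debt > 20", "deny"),
]

counts = Counter(local_rules)
# Global explanation: distinct rules sorted by support
# (number of instances each one covers).
global_rules = [rule for rule, _ in counts.most_common()]
```

A real bottom-up generalization would also merge overlapping conditions and resolve conflicting rules, which this frequency ranking glosses over.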

Local Rule-Based Explanations of Black Box Decision Systems

1 code implementation · 28 May 2018 · Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Dino Pedreschi, Franco Turini, Fosca Giannotti

Then it derives, from the logic of the local interpretable predictor, a meaningful explanation consisting of: a decision rule, which explains the reasons for the decision; and a set of counterfactual rules, suggesting the changes to the instance's features that would lead to a different outcome.

Tasks: counterfactual
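The decision-rule-plus-counterfactual-rule output format described above can be sketched with a much-simplified stand-in (not LORE itself): sample a synthetic neighborhood around the instance, fit a one-split surrogate that best mimics the black box locally, then read a decision rule and a counterfactual rule off the split. The black box and feature are invented for the example.

```python
import random

def black_box(x):
    """Hypothetical black box: deny iff age < 30."""
    return "deny" if x["age"] < 30 else "grant"

def explain(instance, n_samples=200, seed=0):
    rng = random.Random(seed)
    # 1. Sample a synthetic neighborhood around the instance.
    neighborhood = [{"age": instance["age"] + rng.uniform(-15, 15)}
                    for _ in range(n_samples)]
    labels = [black_box(z) for z in neighborhood]
    # 2. Fit a decision stump: the threshold that best mimics the box.
    best = None
    for z in neighborhood:
        t = z["age"]
        acc = sum((y == "deny") == (w["age"] < t)
                  for w, y in zip(neighborhood, labels)) / n_samples
        if best is None or acc > best[1]:
            best = (t, acc)
    t = best[0]
    # 3. Read the rules off the surrogate's split.
    outcome = black_box(instance)
    side = "<" if instance["age"] < t else ">="
    flip = ">=" if side == "<" else "<"
    other = "grant" if outcome == "deny" else "deny"
    decision_rule = f"IF age {side} {t:.1f} THEN {outcome}"
    counterfactual_rule = f"IF age {flip} {t:.1f} THEN {other}"
    return decision_rule, counterfactual_rule

rule, cf_rule = explain({"age": 26.0})
```

The actual method fits a full decision tree on a genetically generated neighborhood and extracts rules from its paths; the stump above only conveys the shape of the output.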

A Survey Of Methods For Explaining Black Box Models

no code implementations · 6 Feb 2018 · Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Dino Pedreschi, Fosca Giannotti

The applications in which black box decision systems can be used are various, and each approach is typically developed to solve a specific problem, as a consequence delineating, explicitly or implicitly, its own definition of interpretability and explanation.

Tasks: General Classification
