Search Results for author: Salvatore Ruggieri

Found 20 papers, 12 papers with code

Bias Discovery within Human Raters: A Case Study of the Jigsaw Dataset

1 code implementation NLPerspectives (LREC) 2022 Marta Marchiori Manerba, Riccardo Guidotti, Lucia Passaro, Salvatore Ruggieri

Understanding and quantifying the bias introduced by human annotation of data is a crucial problem for trustworthy supervised learning.

Causal Perception

no code implementations 24 Jan 2024 Jose M. Alvarez, Salvatore Ruggieri

In this work, we formalize perception under causal reasoning to capture the act of interpretation by an individual.

Decision Making Fairness

Deep Neural Network Benchmarks for Selective Classification

1 code implementation 23 Jan 2024 Andrea Pugnana, Lorenzo Perini, Jesse Davis, Salvatore Ruggieri

The selective classification framework aims to design a mechanism that balances the fraction of rejected predictions (i.e., the proportion of examples for which the model does not make a prediction) against the improvement in predictive performance on the selected predictions.

Benchmarking Classification
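The abstract above describes the core trade-off of selective classification: rejecting low-confidence examples lowers coverage but typically raises accuracy on what remains. A minimal sketch of this idea, using a simple confidence threshold on predicted probabilities (the function name and threshold value are illustrative, not the benchmark's actual mechanism):

```python
import numpy as np

def selective_predict(probs, threshold=0.8):
    """Abstain whenever the top class probability is below `threshold`.

    Returns predicted labels (-1 marks a rejected example) and the
    coverage, i.e. the fraction of examples actually predicted.
    """
    probs = np.asarray(probs, dtype=float)
    confidence = probs.max(axis=1)          # top-class probability per example
    preds = probs.argmax(axis=1)
    accept = confidence >= threshold
    preds = np.where(accept, preds, -1)     # reject low-confidence examples
    coverage = accept.mean()
    return preds, coverage
```

Raising the threshold shrinks coverage while usually improving predictive performance on the selected subset, which is exactly the balance the framework evaluates.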

Correlation and Unintended Biases on Univariate and Multivariate Decision Trees

1 code implementation 4 Dec 2023 Mattia Setzu, Salvatore Ruggieri

Decision Trees are accessible, interpretable, and well-performing classification models.

Causal Fairness-Guided Dataset Reweighting using Neural Networks

no code implementations 17 Nov 2023 Xuan Zhao, Klaus Broelemann, Salvatore Ruggieri, Gjergji Kasneci

The two neural networks approximate the causal model of the data and the causal model of interventions.

Fairness

Ensemble of Counterfactual Explainers

1 code implementation 29 Aug 2023 Riccardo Guidotti, Salvatore Ruggieri

In eXplainable Artificial Intelligence (XAI), several counterfactual explainers have been proposed, each focusing on some desirable properties of counterfactual instances: minimality, actionability, stability, diversity, plausibility, and discriminative power.

counterfactual Explainable artificial intelligence +1

The Initial Screening Order Problem

no code implementations 28 Jul 2023 Jose M. Alvarez, Antonio Mastropietro, Salvatore Ruggieri

To study the impact of the initial screening order (ISO), we introduce a human-like screener and compare it to its algorithmic counterpart.

Decision Making Fairness +1

Reason to explain: Interactive contrastive explanations (REASONX)

1 code implementation 29 May 2023 Laura State, Salvatore Ruggieri, Franco Turini

REASONX provides interactive contrastive explanations that can be augmented by background knowledge, and it can operate in a setting of under-specified information, leading to increased flexibility in the provided explanations.

Beyond Demographic Parity: Redefining Equal Treatment

no code implementations 14 Mar 2023 Carlos Mougan, Laura State, Antonio Ferrara, Salvatore Ruggieri, Steffen Staab

Liberalism-oriented political philosophy reasons that all individuals should be treated equally independently of their protected characteristics.

Fairness Philosophy

Domain Adaptive Decision Trees: Implications for Accuracy and Fairness

1 code implementation 27 Feb 2023 Jose M. Alvarez, Kristen M. Scott, Salvatore Ruggieri, Bettina Berendt

When using pre-trained machine learning models, it is a known issue that the target population on which the model is deployed may not be reflected in the source population on which the model was trained.

Domain Adaptation Fairness

Counterfactual Situation Testing: Uncovering Discrimination under Fairness given the Difference

1 code implementation 23 Feb 2023 Jose M. Alvarez, Salvatore Ruggieri

For any complainant, we find and compare similar protected and non-protected instances in the dataset used by the classifier to construct a control and test group, where a difference between the decision outcomes of the two groups implies potential individual discrimination.

Attribute counterfactual +2
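The excerpt above describes the control/test group construction behind situation testing: compare decision outcomes among similar protected and non-protected instances around a complainant. A simplified, illustrative sketch of classic k-NN situation testing (not the paper's counterfactual variant; the function name, distance, and threshold are assumptions):

```python
import numpy as np

def situation_test(X, protected, decisions, complainant, k=3, tau=0.2):
    """Flag potential individual discrimination for one complainant.

    Compares the positive-decision rate among the k nearest
    non-protected neighbours (test group) with that among the k nearest
    protected neighbours (control group); a gap above `tau` is flagged.
    Illustrative only: real situation testing uses a carefully chosen
    distance and statistical significance checks.
    """
    X = np.asarray(X, dtype=float)
    protected = np.asarray(protected, dtype=bool)
    decisions = np.asarray(decisions, dtype=float)
    dist = np.linalg.norm(X - X[complainant], axis=1)
    dist[complainant] = np.inf  # exclude the complainant itself

    def group_rate(mask):
        # nearest k instances within the given group
        idx = np.argsort(np.where(mask, dist, np.inf))[:k]
        return decisions[idx].mean()

    gap = group_rate(~protected) - group_rate(protected)
    return gap, gap > tau
```

A large positive gap means similar non-protected individuals received favourable decisions much more often than similar protected ones, which is the signal of potential individual discrimination the excerpt refers to.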

AUC-based Selective Classification

1 code implementation 19 Oct 2022 Andrea Pugnana, Salvatore Ruggieri

We propose a model-agnostic approach to associate a selection function to a given probabilistic binary classifier.

Classification

Fairness Implications of Encoding Protected Categorical Attributes

2 code implementations 27 Jan 2022 Carlos Mougan, Jose M. Alvarez, Salvatore Ruggieri, Steffen Staab

We investigate the interaction between categorical encodings and target encoding regularization methods that reduce unfairness.

Fairness Feature Engineering
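The excerpt above mentions target encoding with regularization as a way to encode protected categorical attributes. A minimal sketch of smoothed target encoding, one common regularization scheme (the function name and the smoothing formula shown are a standard textbook variant, not necessarily the paper's exact method):

```python
import numpy as np

def smoothed_target_encode(categories, targets, m=10.0):
    """Encode a categorical column by the per-category target mean,
    shrunk towards the global mean:

        encoding(c) = (n_c * mean_c + m * global_mean) / (n_c + m)

    Larger `m` pulls rare categories (e.g. small protected groups)
    towards the global mean, which is one way regularization can
    dampen the unfairness induced by raw per-group means.
    """
    categories = np.asarray(categories)
    targets = np.asarray(targets, dtype=float)
    global_mean = targets.mean()
    encoding = {}
    for c in np.unique(categories):
        mask = categories == c
        n_c = mask.sum()
        encoding[c] = (n_c * targets[mask].mean() + m * global_mean) / (n_c + m)
    return np.array([encoding[c] for c in categories]), encoding
```

With `m = 0` this reduces to plain target encoding; the paper studies how such encoding/regularization choices interact with fairness.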

Estimating the Total Volume of Queries to a Search Engine

1 code implementation 24 Jan 2021 Fabrizio Lillo, Salvatore Ruggieri

The observed volumes of sample queries are collected from Google Trends (continuous data) and SearchVolume (binned data).

On The Stability of Interpretable Models

no code implementations 22 Oct 2018 Riccardo Guidotti, Salvatore Ruggieri

Interpretable classification models are built with the purpose of providing a comprehensible description of the decision logic to an external oversight agent.

Classification feature selection +2

Open the Black Box: Data-Driven Explanation of Black Box Decision Systems

no code implementations 26 Jun 2018 Dino Pedreschi, Fosca Giannotti, Riccardo Guidotti, Anna Monreale, Luca Pappalardo, Salvatore Ruggieri, Franco Turini

We introduce the local-to-global framework for black box explanation, a novel approach with promising early results, which paves the way for a wide spectrum of future developments along three dimensions: (i) the language for expressing explanations in terms of highly expressive logic-based rules, with a statistical and causal interpretation; (ii) the inference of local explanations aimed at revealing the logic of the decision adopted for a specific instance by querying and auditing the black box in the vicinity of the target instance; (iii) the bottom-up generalization of the many local explanations into simple global ones, with algorithms that optimize the quality and comprehensibility of explanations.

Decision Making

Local Rule-Based Explanations of Black Box Decision Systems

1 code implementation 28 May 2018 Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Dino Pedreschi, Franco Turini, Fosca Giannotti

Then it derives from the logic of the local interpretable predictor a meaningful explanation consisting of: a decision rule, which explains the reasons for the decision; and a set of counterfactual rules, suggesting the changes in the instance's features that lead to a different outcome.

counterfactual

A Survey Of Methods For Explaining Black Box Models

no code implementations 6 Feb 2018 Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Dino Pedreschi, Fosca Giannotti

Black box decision systems are used in a wide variety of applications, and each explanation approach is typically developed to solve a specific problem, thereby delineating, explicitly or implicitly, its own definition of interpretability and explanation.

General Classification

Enumerating Distinct Decision Trees

no code implementations ICML 2017 Salvatore Ruggieri

The search space for the feature selection problem in decision tree learning is the lattice of subsets of the available features.

feature selection
