Search Results for author: Emanuele Albini

Found 6 papers, 1 paper with code

REFRESH: Responsible and Efficient Feature Reselection Guided by SHAP Values

no code implementations · 13 Mar 2024 · Shubham Sharma, Sanghamitra Dutta, Emanuele Albini, Freddy Lecue, Daniele Magazzeni, Manuela Veloso

In this paper, we introduce the problem of feature reselection, in which features are selected efficiently with respect to secondary model performance characteristics even after a feature selection process has already been carried out for a primary objective.

Fairness · Feature Selection

On the Connection between Game-Theoretic Feature Attributions and Counterfactual Explanations

no code implementations · 13 Jul 2023 · Emanuele Albini, Shubham Sharma, Saumitra Mishra, Danial Dervovic, Daniele Magazzeni

Explainable Artificial Intelligence (XAI) has received widespread interest in recent years; two of the most popular types of explanations are feature attributions and counterfactual explanations.

Counterfactual · Counterfactual Explanation · +3
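Not from the paper itself — as a toy illustration of the counterfactual-explanation idea mentioned in the abstract, the sketch below finds the closest point on the other side of a hypothetical linear model's decision boundary (all weights and inputs are made up for illustration):

```python
# Hypothetical linear scoring model: accept if w.x + b >= 0.
w = [1.0, 2.0]
b = -4.0

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

x = [1.0, 1.0]  # score(x) = -1.0, so this instance is rejected

# For a linear boundary w.x + b = 0, the nearest (L2) counterfactual is
# the orthogonal projection of x onto the hyperplane:
#   cf = x - (score(x) / ||w||^2) * w
norm_sq = sum(wi * wi for wi in w)
delta = -score(x) / norm_sq
cf = [xi + delta * wi for xi, wi in zip(x, w)]

# cf lies exactly on the boundary (score(cf) == 0); in practice one
# would nudge slightly past it to flip the predicted class.
```

Real counterfactual methods handle nonlinear models and add constraints such as sparsity or actionability, but the core idea is the same: the smallest change to the input that changes the outcome.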

Counterfactual Shapley Additive Explanations

2 code implementations · 27 Oct 2021 · Emanuele Albini, Jason Long, Danial Dervovic, Daniele Magazzeni

Feature attributions are a common paradigm for model explanations due to their simplicity: they assign a single numeric score to each input feature of a model.

Counterfactual · Counterfactual Explanation · +2
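Not the paper's method — as a minimal sketch of the per-feature numeric scores the abstract describes, here is a brute-force exact Shapley value computation for a hypothetical 3-feature linear model (model, instance, and baseline are all made-up assumptions):

```python
from itertools import combinations
from math import factorial

# Hypothetical linear model: f(x) = w.x + b
w = [2.0, -1.0, 0.5]
b = 1.0

def f(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

x = [1.0, 3.0, -2.0]        # instance to explain
baseline = [0.0, 0.0, 0.0]  # reference input standing in for "absent" features

def value(subset):
    # Evaluate f with features in `subset` taken from x, the rest from baseline.
    z = [x[i] if i in subset else baseline[i] for i in range(len(x))]
    return f(z)

def shapley(i, n):
    # Weighted average of feature i's marginal contribution over all subsets.
    others = [j for j in range(n) if j != i]
    total = 0.0
    for k in range(len(others) + 1):
        for s in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (value(set(s) | {i}) - value(set(s)))
    return total

phi = [shapley(i, len(x)) for i in range(len(x))]
# Efficiency property: the attributions sum to f(x) - f(baseline).
```

For a linear model the exact Shapley value reduces to `w[i] * (x[i] - baseline[i])`; practical tools like SHAP approximate these values for arbitrary models, since exact enumeration is exponential in the number of features.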

Argumentative XAI: A Survey

no code implementations · 24 May 2021 · Kristijonas Čyras, Antonio Rago, Emanuele Albini, Pietro Baroni, Francesca Toni

Explainable AI (XAI) has been investigated for decades and, together with AI itself, has witnessed unprecedented growth in recent years.

Explainable Artificial Intelligence (XAI)

Influence-Driven Explanations for Bayesian Network Classifiers

no code implementations · 10 Dec 2020 · Antonio Rago, Emanuele Albini, Pietro Baroni, Francesca Toni

One of the most pressing issues in AI in recent years has been the need to address the lack of explainability of many of its models.

Counterfactual · Relation

Deep Argumentative Explanations

no code implementations · 10 Dec 2020 · Emanuele Albini, Piyawat Lertvittayakumjorn, Antonio Rago, Francesca Toni

Despite the recent, widespread focus on eXplainable AI (XAI), explanations computed by XAI methods tend to provide little insight into how Neural Networks (NNs) actually function.

Explainable Artificial Intelligence (XAI) · Text Classification
