no code implementations • 13 Mar 2024 • Shubham Sharma, Sanghamitra Dutta, Emanuele Albini, Freddy Lecue, Daniele Magazzeni, Manuela Veloso
In this paper, we introduce the problem of feature \emph{reselection}: efficiently selecting features with respect to secondary model performance characteristics even after a feature selection process has already been performed with respect to a primary objective.
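To make the problem statement concrete, here is a toy sketch (not the paper's algorithm — the greedy swap, the score functions, and all names are illustrative assumptions): starting from features chosen for a primary objective, we accept one-for-one swaps that improve a secondary metric while keeping the primary score above a threshold.

```python
# Toy illustration of feature reselection (NOT the paper's method).
# Per-feature scores are synthetic; a real setting would score feature
# subsets by retraining/evaluating a model.
P = {"a": 0.9, "b": 0.8, "c": 0.7, "d": 0.85}  # primary score per feature
S = {"a": 0.1, "b": 0.2, "c": 0.9, "d": 0.8}   # secondary score per feature

def primary(feats):
    return sum(P[f] for f in feats) / len(feats)

def secondary(feats):
    return sum(S[f] for f in feats) / len(feats)

def reselect(selected, pool, primary, secondary, min_primary):
    """Greedy one-for-one swaps: accept a swap if it raises the
    secondary score while the primary score stays >= min_primary."""
    best = list(selected)
    for i in range(len(best)):
        for f_in in pool:
            if f_in in best:
                continue  # avoid duplicate features
            cand = best[:i] + [f_in] + best[i + 1:]
            if (primary(cand) >= min_primary
                    and secondary(cand) > secondary(best)):
                best = cand
    return best

print(reselect(["a", "b"], ["c", "d"], primary, secondary, min_primary=0.8))
# → ['d', 'b']  ("a" is swapped for "d"; "c" would drop primary below 0.8)
```

The point of the sketch is only the constraint structure: the secondary objective is optimized subject to not degrading the primary one below an acceptable level.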
no code implementations • 13 Jul 2023 • Emanuele Albini, Shubham Sharma, Saumitra Mishra, Danial Dervovic, Daniele Magazzeni
Explainable Artificial Intelligence (XAI) has received widespread interest in recent years, and two of the most popular types of explanations are feature attributions and counterfactual explanations.
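A minimal sketch of the counterfactual-explanation idea (illustrative only, not this paper's method; the toy classifier and the single-feature greedy search are assumptions): a counterfactual answers "what small change to the input would flip the model's prediction?"

```python
# Counterfactual explanation for a toy linear classifier:
# nudge the most influential feature until the predicted class flips.
def predict(x, w, b):
    """Class 1 if w.x + b > 0, else class 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def counterfactual(x, w, b, step=0.1, max_iter=1000):
    """Greedily move the highest-|weight| feature in the direction that
    flips the prediction; returns the counterfactual input or None."""
    original = predict(x, w, b)
    cf = list(x)
    i = max(range(len(w)), key=lambda j: abs(w[j]))
    direction = -step if original == 1 else step
    direction *= 1 if w[i] > 0 else -1
    for _ in range(max_iter):
        if predict(cf, w, b) != original:
            return cf
        cf[i] += direction
    return None

x, w, b = [1.0, 2.0], [0.5, -0.25], 0.1
print(predict(x, w, b))          # → 1
print(counterfactual(x, w, b))   # a nearby input classified as 0
```

Real counterfactual methods additionally optimize for proximity, sparsity, and plausibility; this sketch only shows the "flip the prediction" core.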
2 code implementations • 27 Oct 2021 • Emanuele Albini, Jason Long, Danial Dervovic, Daniele Magazzeni
Feature attributions are a common paradigm for model explanations because of their simplicity: they assign a single numeric score to each input feature of a model.
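For a linear model the "one score per feature" idea has a simple closed form (a standard illustration, not taken from this paper): the contribution w_i * x_i of each feature is its attribution, and the scores sum to the model output minus the bias.

```python
# Feature attribution for a linear model: per-feature contribution w_i * x_i.
def linear_attributions(x, w):
    """One numeric score per input feature; scores sum to w.x."""
    return [wi * xi for wi, xi in zip(w, x)]

x = [1.0, 2.0, -1.0]
w = [0.5, -0.25, 1.0]
print(linear_attributions(x, w))  # → [0.5, -0.5, -1.0]
```

Attribution methods for non-linear models (e.g. Shapley-value-based ones) generalize exactly this additive decomposition of the prediction.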
no code implementations • 24 May 2021 • Kristijonas Čyras, Antonio Rago, Emanuele Albini, Pietro Baroni, Francesca Toni
Explainable AI (XAI) has been investigated for decades and, together with AI itself, has witnessed unprecedented growth in recent years.
no code implementations • 10 Dec 2020 • Antonio Rago, Emanuele Albini, Pietro Baroni, Francesca Toni
One of the most pressing issues in AI in recent years has been the need to address the lack of explainability of many of its models.
no code implementations • 10 Dec 2020 • Emanuele Albini, Piyawat Lertvittayakumjorn, Antonio Rago, Francesca Toni
Despite the recent, widespread focus on eXplainable AI (XAI), explanations computed by XAI methods tend to provide little insight into the functioning of Neural Networks (NNs).