Search Results for author: Anna Monreale

Found 9 papers, 2 papers with code

A Bag of Receptive Fields for Time Series Extrinsic Predictions

no code implementations · 29 Nov 2023 · Francesco Spinnato, Riccardo Guidotti, Anna Monreale, Mirco Nanni

High-dimensional time series data poses challenges due to its dynamic nature, varying lengths, and presence of missing values.

Tasks: Regression, Time Series, +1

Explainable Authorship Identification in Cultural Heritage Applications: Analysis of a New Perspective

no code implementations · 3 Nov 2023 · Mattia Setzu, Silvia Corbara, Anna Monreale, Alejandro Moreo, Fabrizio Sebastiani

While a substantial amount of work has recently been devoted to enhancing the performance of computational Authorship Identification (AId) systems, little to no attention has been paid to endowing AId systems with the ability to explain the reasons behind their predictions.

Tasks: Authorship Attribution, Authorship Verification, +3

MulBot: Unsupervised Bot Detection Based on Multivariate Time Series

no code implementations · 21 Sep 2022 · Lorenzo Mannocci, Stefano Cresci, Anna Monreale, Athina Vakali, Maurizio Tesconi

Not only does MulBot achieve excellent results in the binary classification task, but we also demonstrate its strengths in a novel and practically-relevant task: detecting and separating different botnets.

Tasks: Binary Classification, Multi-class Classification, +2

Human Response to an AI-Based Decision Support System: A User Study on the Effects of Accuracy and Bias

no code implementations · 24 Mar 2022 · David Solans, Andrea Beretta, Manuel Portela, Carlos Castillo, Anna Monreale

We observe that this setting elicits mostly rational behavior from participants, who place a moderate amount of trust in the DSS and show neither algorithmic aversion (under-reliance) nor automation bias (over-reliance). However, their stated willingness to accept the DSS in the exit survey seems less sensitive to the accuracy of the DSS than their behavior, suggesting that users are only partially aware of the (lack of) accuracy of the DSS.

GLocalX -- From Local to Global Explanations of Black Box AI Models

1 code implementation · 19 Jan 2021 · Mattia Setzu, Riccardo Guidotti, Anna Monreale, Franco Turini, Dino Pedreschi, Fosca Giannotti

Our findings show how it is often possible to achieve a high level of both accuracy and comprehensibility of classification models, even in complex domains with high-dimensional data, without necessarily trading one property for the other.

Tasks: Decision Making

Open the Black Box Data-Driven Explanation of Black Box Decision Systems

no code implementations · 26 Jun 2018 · Dino Pedreschi, Fosca Giannotti, Riccardo Guidotti, Anna Monreale, Luca Pappalardo, Salvatore Ruggieri, Franco Turini

We introduce the local-to-global framework for black box explanation, a novel approach with promising early results, which paves the way for a wide spectrum of future developments along three dimensions: (i) the language for expressing explanations in terms of highly expressive logic-based rules, with a statistical and causal interpretation; (ii) the inference of local explanations aimed at revealing the logic of the decision adopted for a specific instance, by querying and auditing the black box in the vicinity of the target instance; (iii) the bottom-up generalization of the many local explanations into simple global ones, with algorithms that optimize the quality and comprehensibility of explanations.

Tasks: Decision Making
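The third dimension of this framework, the bottom-up generalization of many local explanations into a few global ones, can be illustrated with a small sketch. The snippet below is a hypothetical, heavily simplified rendering of that idea, not the authors' algorithm: local rules are assumed to be axis-aligned interval premises with a predicted label, and two same-label rules are greedily merged whenever the merged rule keeps a chosen fidelity to the black-box labels on validation data. The rule format, the merge operator, and the min_fidelity threshold are all illustrative assumptions.

import numpy as np

def rule_covers(rule, X):
    # Boolean mask of the rows of X that satisfy every interval in the premise.
    mask = np.ones(len(X), dtype=bool)
    for feat, (low, high) in rule["premise"].items():
        mask &= (X[:, feat] >= low) & (X[:, feat] <= high)
    return mask

def merge_rules(r1, r2):
    # Least-general rule covering both premises: the feature-wise interval hull,
    # restricted to the features that both premises constrain.
    shared = set(r1["premise"]) & set(r2["premise"])
    premise = {f: (min(r1["premise"][f][0], r2["premise"][f][0]),
                   max(r1["premise"][f][1], r2["premise"][f][1]))
               for f in shared}
    return {"premise": premise, "label": r1["label"]}

def fidelity(rule, X, y_black_box):
    # Fraction of covered validation points on which the rule's label agrees
    # with the black-box label; 0 if the rule covers nothing.
    covered = rule_covers(rule, X)
    return float((y_black_box[covered] == rule["label"]).mean()) if covered.any() else 0.0

def local_to_global(local_rules, X_val, y_black_box, min_fidelity=0.9):
    # Greedily merge pairs of same-label local rules as long as the merged rule
    # stays faithful enough to the black box on validation data.
    rules = list(local_rules)
    merged = True
    while merged:
        merged = False
        for i in range(len(rules)):
            for j in range(i + 1, len(rules)):
                if rules[i]["label"] != rules[j]["label"]:
                    continue
                candidate = merge_rules(rules[i], rules[j])
                if candidate["premise"] and fidelity(candidate, X_val, y_black_box) >= min_fidelity:
                    rules = [r for k, r in enumerate(rules) if k not in (i, j)]
                    rules.append(candidate)
                    merged = True
                    break
            if merged:
                break
    return rules

A local rule in this toy format would look like {"premise": {0: (0.0, 1.5), 2: (3.0, 7.0)}, "label": 1}; starting from one such rule per explained instance, the loop returns a shorter list of broader rules.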

Local Rule-Based Explanations of Black Box Decision Systems

1 code implementation · 28 May 2018 · Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Dino Pedreschi, Franco Turini, Fosca Giannotti

Then, from the logic of the local interpretable predictor, it derives a meaningful explanation consisting of: a decision rule, which explains the reasons for the decision; and a set of counterfactual rules, suggesting the changes to the instance's features that lead to a different outcome.

Tasks: Counterfactual
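The recipe described in this abstract (a local interpretable predictor learned around the instance, a decision rule read off its logic, and counterfactual suggestions) can be sketched roughly as follows. This is a minimal illustration under strong simplifying assumptions, not the LORE implementation linked above: the neighbourhood is plain Gaussian noise around the instance, the surrogate is a depth-4 decision tree, the counterfactual step simply returns nearby neighbourhood points with a different black-box outcome rather than full counterfactual rules, and black_box is assumed to expose a scikit-learn-style predict method.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def explain_instance(black_box, x, n_samples=1000, scale=0.3, seed=0):
    rng = np.random.default_rng(seed)

    # 1. Query the black box on a synthetic neighbourhood of the target instance.
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    y_bb = black_box.predict(Z)

    # 2. Fit a shallow decision tree as the local interpretable predictor.
    surrogate = DecisionTreeClassifier(max_depth=4).fit(Z, y_bb)

    # 3. Decision rule: the conjunction of split conditions along the path
    #    that the target instance follows in the surrogate tree.
    tree, node, rule = surrogate.tree_, 0, []
    while tree.children_left[node] != -1:  # -1 marks a leaf
        feat, thr = tree.feature[node], tree.threshold[node]
        if x[feat] <= thr:
            rule.append(f"x[{feat}] <= {thr:.3f}")
            node = tree.children_left[node]
        else:
            rule.append(f"x[{feat}] > {thr:.3f}")
            node = tree.children_right[node]

    # 4. Counterfactual suggestions: the closest neighbourhood points that the
    #    black box labels differently from the target instance (a simplification
    #    of the counterfactual rules described in the abstract).
    y_x = black_box.predict(x.reshape(1, -1))[0]
    flipped = Z[y_bb != y_x]
    order = np.argsort(np.linalg.norm(flipped - x, axis=1))
    return {"decision_rule": rule,
            "black_box_outcome": y_x,
            "counterfactual_examples": flipped[order[:3]]}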

A Survey Of Methods For Explaining Black Box Models

no code implementations · 6 Feb 2018 · Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Dino Pedreschi, Fosca Giannotti

The applications in which black box decision systems can be used are various, and each approach is typically developed to provide a solution for a specific problem; as a consequence, each delineates, explicitly or implicitly, its own definition of interpretability and explanation.

Tasks: General Classification
