Deep Argumentative Explanations

10 Dec 2020 · Emanuele Albini, Piyawat Lertvittayakumjorn, Antonio Rago, Francesca Toni

Despite the recent, widespread focus on eXplainable AI (XAI), explanations computed by XAI methods tend to provide little insight into the functioning of Neural Networks (NNs). We propose a novel framework for obtaining (local) explanations from NNs while providing transparency about their inner workings, and show how to deploy it for various neural architectures and tasks. We refer to our novel explanations collectively as Deep Argumentative eXplanations (DAXs in short), given that they reflect the deep structure of the underlying NNs and that they are defined in terms of notions from computational argumentation, a form of symbolic AI offering useful reasoning abstractions for explanation. We evaluate DAXs empirically, showing that they exhibit deep fidelity and low computational cost. We also conduct human experiments indicating that DAXs are comprehensible to humans and align with their judgement, while also being competitive, in terms of user acceptance, with some existing approaches to XAI that also have an argumentative spirit.
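
The paper defines DAXs formally in terms of computational argumentation. As a rough intuition only, one way to picture an argumentative reading of a network's inner structure is to treat neurons as arguments and label edges as "supports" or "attacks" according to the sign of each local contribution. The sketch below is a hypothetical illustration of that general idea on a toy feed-forward network; it is not the authors' DAX construction, and all names, weights, and the `argumentative_graph` helper are invented for illustration.

```python
# Hypothetical sketch: reading a bipolar-argumentation-style graph off a tiny
# feed-forward NN (arguments = inputs/neurons; edge polarity = sign of the
# local contribution weight * activation). NOT the DAX algorithm from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 3 inputs -> 4 hidden units (ReLU) -> 1 output (assumed weights).
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

def forward(x):
    h = np.maximum(0, W1 @ x + b1)   # hidden activations
    y = W2 @ h + b2                  # output score
    return h, y

def argumentative_graph(x, feature_names, hidden_names, output_name="output"):
    """Label each edge 'support' or 'attack' by the sign of its contribution."""
    h, _ = forward(x)
    edges = []
    # input -> hidden edges
    for j, hname in enumerate(hidden_names):
        for i, fname in enumerate(feature_names):
            contrib = W1[j, i] * x[i]
            if contrib != 0:
                edges.append((fname, hname, "support" if contrib > 0 else "attack"))
    # hidden -> output edges
    for j, hname in enumerate(hidden_names):
        contrib = W2[0, j] * h[j]
        if contrib != 0:
            edges.append((hname, output_name, "support" if contrib > 0 else "attack"))
    return edges

x = np.array([0.5, -1.0, 2.0])
for src, dst, rel in argumentative_graph(x, ["x1", "x2", "x3"], ["h1", "h2", "h3", "h4"]):
    print(f"{src} --{rel}--> {dst}")
```

Such a graph exposes intermediate (hidden-layer) structure rather than only input-output attributions, which is the sense in which the paper's explanations are "deep"; the paper's actual constructions and their fidelity guarantees are specified there, not here.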
