On the stability, correctness and plausibility of visual explanation methods based on feature importance

In the field of Explainable AI, multiple evaluation metrics have been proposed to assess the quality of explanation methods with respect to a set of desired properties. In this work, we study the interplay between the stability, correctness and plausibility of explanations based on feature importance for image classifiers. We show that existing metrics for evaluating these properties do not always agree, raising the question of what constitutes a good evaluation metric for explanations. Finally, focusing on stability and correctness, we highlight possible limitations of some existing evaluation metrics and propose new ones that take into account the local behaviour of the model under test.
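No official code accompanies the paper. As a rough illustration of the kind of metrics the abstract discusses, below is a minimal Python sketch of two generic evaluation procedures for feature-importance explanations: a perturbation-based stability estimate (a local Lipschitz-style ratio) and a deletion-based correctness curve. The function names, perturbation scheme, and toy model are assumptions made for illustration; they are not the metrics proposed in the paper.

```python
import numpy as np


def stability_estimate(explain, x, n_samples=20, eps=0.01, seed=0):
    """Illustrative stability metric: how much the attribution map changes
    relative to a small input perturbation (worst case over samples).
    Lower values indicate more stable explanations."""
    rng = np.random.default_rng(seed)
    base = explain(x)
    worst = 0.0
    for _ in range(n_samples):
        noise = rng.normal(scale=eps, size=x.shape)
        ratio = np.linalg.norm(explain(x + noise) - base) / np.linalg.norm(noise)
        worst = max(worst, ratio)
    return worst


def deletion_correctness(predict, explain, x, steps=10, baseline=0.0):
    """Illustrative deletion-style correctness check: remove pixels in
    decreasing order of attributed importance and record the model's
    score after each removal. A faster drop suggests the attribution
    better reflects what the model actually uses."""
    attr = explain(x).ravel()
    order = np.argsort(-attr)  # most important pixels first
    x_flat = x.ravel().copy()
    scores = [predict(x_flat.reshape(x.shape))]
    chunk = max(1, len(order) // steps)
    for i in range(0, len(order), chunk):
        x_flat[order[i:i + chunk]] = baseline  # "delete" a chunk of pixels
        scores.append(predict(x_flat.reshape(x.shape)))
    return np.array(scores)


# Toy stand-ins (hypothetical): a linear "model" and gradient*input attribution.
w = np.random.default_rng(1).normal(size=(8, 8))
predict = lambda x: float((w * x).sum())
explain = lambda x: w * x

x = np.abs(np.random.default_rng(2).normal(size=(8, 8)))
print("stability ratio (lower is better):", stability_estimate(explain, x))
print("deletion score curve:", deletion_correctness(predict, explain, x))
```

Taking the worst-case ratio over samples approximates a local Lipschitz constant of the explanation map; averaging instead would give a smoother but more forgiving estimate. Note that neither procedure, as written here, accounts for the local behaviour of the model itself, which is the limitation the paper's proposed metrics aim to address.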
