Contextual Prediction Difference Analysis for Explaining Individual Image Classifications

21 Oct 2019 · Jindong Gu, Volker Tresp

Much effort has been devoted in recent years to understanding the decisions of deep neural networks. A number of model-aware saliency methods have been proposed to explain individual classification decisions by creating saliency maps. However, they are not applicable when the parameters and gradients of the underlying model are unavailable. Recently, model-agnostic methods have also received attention. One of them, Prediction Difference Analysis (PDA), is a probabilistically sound methodology that marginalizes out individual input features and measures the resulting change in the prediction. In this work, we first show that PDA can suffer from saturated classifiers: when the remaining features already drive the output probability close to certainty, removing any single feature barely changes the prediction, so PDA assigns it little relevance. Such saturation is widespread in current neural-network-based classifiers. To better explain the decisions of saturated classifiers, we further propose Contextual PDA, which runs hundreds of times faster than PDA. Experiments on explaining the image classifications of state-of-the-art deep convolutional neural networks demonstrate the superiority of our method.
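
Below is a minimal sketch of the core PDA computation, not the authors' implementation. It assumes a hypothetical `model_predict` callable that maps a batch of images to class probabilities, and it approximates marginalizing a feature out by overwriting an image patch with a single baseline value; the published method instead samples replacement values from a conditional distribution. The `patch` and `fill_value` parameters are illustrative choices, not from the paper.

```python
import numpy as np

def log_odds(p, eps=1e-6):
    """Log2 odds of a probability, clipped for numerical stability."""
    p = np.clip(p, eps, 1.0 - eps)
    return np.log2(p / (1.0 - p))

def pda_saliency(model_predict, image, target_class, patch=8, fill_value=0.0):
    """Occlusion-style approximation of Prediction Difference Analysis.

    For each patch, the relevance is the 'weight of evidence'
    WE = log-odds(p(c | x)) - log-odds(p(c | x with the patch removed)),
    where removal is crudely approximated by a constant fill value.
    """
    full_prob = model_predict(image[None])[0, target_class]
    saliency = np.zeros(image.shape[:2])
    h, w = image.shape[:2]
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill_value
            prob = model_predict(occluded[None])[0, target_class]
            # Positive where the patch supports the target class.
            saliency[y:y + patch, x:x + patch] = log_odds(full_prob) - log_odds(prob)
    return saliency
```

Calling `pda_saliency(predict_fn, img, cls)` yields a per-pixel map in which positive values mark regions whose removal lowers the evidence for the target class. Note that each patch requires a separate forward pass, which is one reason the exhaustive method is slow and why a faster variant is attractive.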

