Entropy-based Logic Explanations of Neural Networks

Explainable artificial intelligence has rapidly emerged since lawmakers began requiring interpretable models in safety-critical domains. Concept-based neural networks have arisen as explainable-by-design methods because they leverage human-understandable symbols (i.e., concepts) to predict class memberships. However, most of these approaches focus on identifying the most relevant concepts and do not provide concise, formal explanations of how the classifier leverages such concepts to make predictions. In this paper, we propose a novel end-to-end differentiable approach enabling the extraction of logic explanations from neural networks using the formalism of First-Order Logic. The method relies on an entropy-based criterion which automatically identifies the most relevant concepts. We consider four different case studies to demonstrate that: (i) this entropy-based criterion enables the distillation of concise logic explanations in safety-critical domains, from clinical data to computer vision; (ii) the proposed approach outperforms state-of-the-art white-box models in terms of classification accuracy.
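To make the entropy-based criterion concrete, the following is a minimal sketch of the idea described in the abstract: raw per-concept scores are normalized into a relevance distribution, and the Shannon entropy of that distribution is penalized so the network learns to rely on only a few concepts. Function names, the softmax scoring, and the selection threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def concept_relevance(scores, temperature=1.0):
    """Turn raw per-concept scores into a relevance distribution (softmax).
    The softmax parameterization is an assumption for illustration."""
    z = np.asarray(scores, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def entropy_penalty(p, eps=1e-12):
    """Shannon entropy of the relevance distribution; minimizing it
    concentrates relevance on a few concepts, which is what enables
    concise logic explanations."""
    p = np.clip(p, eps, 1.0)
    return float(-(p * np.log(p)).sum())

# Hypothetical raw scores for 4 concepts; one clearly dominates.
scores = [4.0, 0.1, -2.0, 0.2]
alpha = concept_relevance(scores)   # relevance distribution over concepts
loss_reg = entropy_penalty(alpha)   # added to the task loss as a regularizer

# Concepts whose relevance exceeds a (hypothetical) threshold would be
# the ones retained in the extracted logic formula.
relevant = [i for i, a in enumerate(alpha) if a > 0.5 / len(alpha)]
```

A peaked distribution yields a lower penalty than a uniform one, so gradient descent on the combined loss drives the model toward sparse concept usage.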


Datasets

CUB, MIMIC-II, vDem
Results from the Paper


| Task | Dataset | Model | Classification Accuracy | Explanation Accuracy | Explanation Complexity | Explanation Extraction Time |
|---|---|---|---|---|---|---|
| Image Classification | CUB | Entropy-based Logic Explained Network | 0.9295 (#1) | 95.24 (#2) | 3.74 (#4) | 171.87 (#3) |
| Image Classification | CUB | Decision Tree | 0.8162 (#4) | 89.36 (#3) | 45.92 (#1) | 8.1 (#4) |
| Image Classification | CUB | Bayesian Rule List | 0.9079 (#3) | 96.02 (#1) | 8.87 (#3) | 264678.29 (#1) |
| Image Classification | CUB | $\psi$ network | 0.9192 (#2) | 76.1 (#4) | 15.96 (#2) | 3707.29 (#2) |
| Classification | MIMIC-II | Bayesian Rule List | 0.764 (#4) | 70.59 (#1) | 57.7 (#2) | 440.24 (#1) |
| Classification | MIMIC-II | $\psi$ network | 0.7719 (#3) | 49.51 (#4) | 20.6 (#3) | 36.68 (#2) |
| Classification | MIMIC-II | Decision Tree | 0.7753 (#2) | 69.15 (#2) | 66.6 (#1) | 6 (#4) |
| Classification | MIMIC-II | Entropy-based Logic Explained Network | 0.7905 (#1) | 66.93 (#3) | 3.5 (#4) | 23.08 (#3) |
| Classification | vDem | $\psi$ network | 0.8977 (#3) | 67.08 (#4) | 5.4 (#3) | 103.78 (#2) |
| Classification | vDem | Entropy-based Logic Explained Network | 0.9451 (#1) | 89.88 (#2) | 3.1 (#4) | 59.9 (#3) |
| Classification | vDem | Bayesian Rule List | 0.9123 (#2) | 91.21 (#1) | 145.7 (#1) | 22843.21 (#1) |
| Classification | vDem | Decision Tree | 0.8561 (#4) | 85.45 (#3) | 30.2 (#2) | 0.49 (#4) |

Global ranks per metric are shown in parentheses.

Methods


No methods listed for this paper.