A copula-based visualization technique for a neural network

27 Mar 2020  ·  Yusuke Kubo, Yuto Komori, Toyonobu Okuyama, Hiroshi Tokieda ·

Interpretability of machine learning is defined as the extent to which humans can comprehend the reasons for a decision. However, a neural network is not considered interpretable because its decision-making process is opaque. Therefore, in this study, we propose a new algorithm that reveals which feature values a trained neural network considers important and which paths are mainly traced in its decision-making process. In the proposed algorithm, we define a score based on the correlation coefficients between neural network layers, which can be computed by applying the concept of a pair copula. In our experiments, we compared the estimated scores with the feature importance values of Random Forest, which is sometimes regarded as a highly interpretable algorithm, and confirmed that the results were consistent with each other. Because the algorithm identifies the paths that contribute to the classification or prediction results, it also suggests an approach to compressing a neural network and tuning its parameters.
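The paper provides no code, but the core idea can be illustrated with a rough sketch. The toy network, its weights, and the choice of Kendall's tau below are all assumptions for illustration, not the authors' implementation: Kendall's tau is a rank correlation determined entirely by the copula of a joint distribution (it is invariant to the margins), so scoring each inter-layer edge by the tau between the paired activations gives a copula-based measure of which connections the network relies on.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained network: a tiny two-layer MLP with
# fixed random weights (the paper's actual architecture is not specified).
W1 = rng.normal(size=(4, 3))   # input (4 features) -> hidden (3 units)
W2 = rng.normal(size=(3, 2))   # hidden -> output (2 units)
relu = lambda z: np.maximum(z, 0.0)

X = rng.normal(size=(500, 4))  # sample inputs
H = relu(X @ W1)               # hidden-layer activations

# Score each input->hidden edge by the Kendall rank correlation between a
# feature and a hidden unit's activation. Rank correlation depends only on
# the copula of the joint distribution, not on the marginal distributions.
scores = np.zeros((4, 3))
for i in range(4):
    for j in range(3):
        tau, _ = kendalltau(X[:, i], H[:, j])
        # A dead unit (constant activation) yields nan; score it as 0.
        scores[i, j] = 0.0 if np.isnan(tau) else abs(tau)

# The highest-scoring edges trace the paths the network relies on most.
best_feature_per_unit = scores.argmax(axis=0)
print(scores.round(2))
print(best_feature_per_unit)
```

The same edge-scoring loop would be repeated for every adjacent pair of layers; chaining the top-scoring edges then yields the dominant decision paths, which is what makes the score usable for pruning low-contribution connections.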
