Towards Interpreting Deep Neural Networks via Understanding Layer Behaviors

25 Sep 2019  ·  Jiezhang Cao, Jincheng Li, Xiping Hu, Peilin Zhao, Mingkui Tan

Deep neural networks (DNNs) have achieved unprecedented practical success in many applications. However, how to interpret DNNs is still an open problem. In particular, how hidden layers behave is not clearly understood. In this paper, relying on a teacher-student paradigm, we seek to understand the layer behaviors of DNNs by "monitoring" both the across-layer and single-layer evolution of layer distributions toward some target distribution during training. Here, "across-layer" refers to the layer behavior along the depth, while "single-layer" refers to the behavior of a specific layer along training epochs. Relying on optimal transport theory, we employ the Wasserstein distance ($W$-distance) to measure the divergence between a layer distribution and the target distribution. Theoretically, we prove that i) the $W$-distance of layers to the target distribution tends to decrease along the depth; ii) the $W$-distance of a specific layer to the target distribution tends to decrease along training iterations; iii) however, a deeper layer is not always better than a shallower layer for some samples. Moreover, our results help to analyze the stability of layer distributions and explain why auxiliary losses help the training of DNNs. Extensive experiments on real-world datasets justify our theoretical findings.
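As a rough illustration of the "monitoring" idea described in the abstract (not the authors' code), the following minimal Python sketch tracks the 1-D Wasserstein distance between each layer's activation distribution and a target distribution; the random activations, the Gaussian target, and the helper layer_distances are hypothetical stand-ins for a real network's hidden representations and the paper's target distribution.

    # Minimal sketch: per-layer W-distance to an assumed target distribution.
    import numpy as np
    from scipy.stats import wasserstein_distance

    rng = np.random.default_rng(0)

    def layer_distances(activations_per_layer, target_samples):
        """W-distance of each layer's flattened activations to the target samples."""
        return [wasserstein_distance(act.ravel(), target_samples)
                for act in activations_per_layer]

    # Hypothetical activations of a 4-layer network at one training epoch
    # (batch of 256 samples, 64 units per layer).
    activations = [rng.normal(loc=2.0 - 0.4 * depth, scale=1.5, size=(256, 64))
                   for depth in range(4)]
    # Assumed target distribution (e.g., the teacher's output distribution).
    target = rng.normal(loc=0.0, scale=1.0, size=10_000)

    print(layer_distances(activations, target))

Repeating this measurement at every epoch gives the single-layer view (one layer's distance over training), while a single epoch's list gives the across-layer view (distance along the depth); in the paper's setting both are expected to tend to decrease.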
