Epistemic Uncertainty Quantification in Deep Learning Classification by the Delta Method

2 Dec 2019 · Geir K. Nilsen, Antonella Z. Munthe-Kaas, Hans J. Skaug, Morten Brun

The Delta method is a classical procedure for quantifying epistemic uncertainty in statistical models, but its direct application to deep neural networks is prevented by the large number of parameters $P$. We propose a low-cost variant of the Delta method applicable to $L_2$-regularized deep neural networks, based on the top $K$ eigenpairs of the Fisher information matrix. We address efficient computation of full-rank approximate eigendecompositions in terms of the exact inverse Hessian, the inverse outer-products-of-gradients (OPG) approximation, or the so-called Sandwich estimator. Moreover, we provide a bound on the approximation error for the uncertainty of the predictive class probabilities. We observe that when the smallest eigenvalue of the Fisher information matrix is near the $L_2$-regularization rate, the approximation error is close to zero even when $K \ll P$. A demonstration of the methodology is presented using a TensorFlow implementation, and we show that meaningful rankings of images based on predictive uncertainty can be obtained for two LeNet-based neural networks using the MNIST and CIFAR-10 datasets. Furthermore, we observe that false positives have, on average, higher predictive epistemic uncertainty than true positives, which suggests that the uncertainty measure carries supplementary information not captured by the classification alone.
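To make the low-rank idea concrete, below is a minimal NumPy sketch of the rank-$K$ Delta method variance $g^\top \Sigma\, g$, where $g$ is the gradient of a class score with respect to the parameters and the covariance $\Sigma$ is built from the top-$K$ eigenpairs of the Fisher information matrix, with the remaining $P-K$ eigendirections assigned the $L_2$-regularization rate as their eigenvalue (our reading of the abstract's observation that the smallest eigenvalues approach the regularization rate). Function and variable names are hypothetical and do not come from the paper's implementation.

```python
import numpy as np

def predictive_variance(grad, eigvals, eigvecs, l2_rate):
    """Approximate the Delta method variance g^T Sigma g with a rank-K
    eigendecomposition of the Fisher information matrix plus an L2 floor.

    grad    : (P,)   gradient of a class score w.r.t. the P parameters
    eigvals : (K,)   top-K eigenvalues of the Fisher information matrix
    eigvecs : (P, K) corresponding orthonormal eigenvectors
    l2_rate : assumed eigenvalue of the remaining P-K directions
              (the L2-regularization rate; a simplifying assumption)
    """
    # Projection of the gradient onto the top-K eigenspace
    proj = eigvecs.T @ grad
    # Leading contribution: sum_k (u_k^T g)^2 / lambda_k
    var_top = np.sum(proj**2 / eigvals)
    # Residual gradient energy outside the top-K subspace,
    # scaled by 1 / l2_rate for the complement directions
    var_rest = (grad @ grad - proj @ proj) / l2_rate
    return var_top + var_rest

# Hypothetical usage with synthetic eigenpairs: P parameters, K eigenpairs
rng = np.random.default_rng(0)
P, K = 10_000, 50
Q, _ = np.linalg.qr(rng.standard_normal((P, K)))   # orthonormal top-K basis
lam = np.sort(rng.uniform(1.0, 100.0, K))[::-1]    # descending eigenvalues
g = rng.standard_normal(P)                         # gradient of a class score
print(predictive_variance(g, lam, Q, l2_rate=0.01))
```

In practice the top-$K$ eigenpairs would be obtained matrix-free, e.g. by Lanczos iteration (`scipy.sparse.linalg.eigsh` on a Hessian-vector-product `LinearOperator`), rather than by forming the $P \times P$ matrix; the paper's TensorFlow implementation may differ in these details.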
