
Revisiting Explicit Regularization in Neural Networks for Well-Calibrated Predictive Uncertainty

From the statistical learning perspective, complexity control via explicit regularization is necessary for improving the generalization of over-parameterized models. However, the impressive generalization performance of neural networks with only implicit regularization may be at odds with this conventional wisdom. In this work, we revisit the importance of explicit regularization for obtaining well-calibrated predictive uncertainty. Specifically, we introduce a probabilistic measure of calibration performance that is lower bounded by the log-likelihood. We then explore explicit regularization techniques for improving the log-likelihood on unseen samples, which in turn yields well-calibrated predictive uncertainty. Our findings present a new direction for improving the quality of predictive probabilities in deterministic neural networks, offering an efficient and scalable alternative to Bayesian neural networks and ensemble methods.
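As a rough illustration of the idea (not the paper's exact procedure), the sketch below trains a small classifier with an explicit regularizer (weight decay, chosen here as an assumption) and reports the negative log-likelihood on held-out data, which the abstract uses as a proxy for calibration quality. The model, dataset, and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder model and synthetic data; in practice, substitute a real
# architecture and a held-out validation/test split.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
x_train, y_train = torch.randn(512, 20), torch.randint(0, 10, (512,))
x_test, y_test = torch.randn(256, 20), torch.randint(0, 10, (256,))

# Explicit regularization: weight decay (L2 penalty) on the parameters.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)

for _ in range(100):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_train), y_train)  # training NLL
    loss.backward()
    optimizer.step()

# Held-out negative log-likelihood: lower values indicate better-calibrated
# predictive probabilities (the quantity the abstract ties to calibration).
model.eval()
with torch.no_grad():
    test_nll = F.cross_entropy(model(x_test), y_test).item()
print(f"held-out NLL: {test_nll:.4f}")
```

Comparing the held-out NLL across different regularization strengths (e.g., sweeping `weight_decay`) gives a simple way to see how explicit regularization affects predictive uncertainty, independent of accuracy.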
