Distribution-Driven Disjoint Prediction Intervals for Deep Learning

29 Sep 2021  ·  Jaehak Cho, Jae Myung Kim, Sungyeob Han, Jungwoo Lee ·

This paper redefines prediction intervals (PIs) as a union of disjoint intervals. PIs represent predictive uncertainty in regression problems. Because previous PI methods assume a single continuous interval (one lower and one upper bound), they suffer performance degradation in uncertainty estimation when the conditional density function has multiple modes. This paper demonstrates that multimodality should be considered in regression uncertainty estimation. To address the issue, we propose a novel method that generates a union of disjoint PIs. In UCI benchmark experiments, our method improves over current state-of-the-art uncertainty quantification methods, reducing the average PI width by over 27%. Through qualitative experiments, we visualize that multiple modes often exist in real-world datasets and show why our method produces higher-quality PIs than previous single-interval approaches.
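To illustrate the core idea, here is a minimal sketch (not the paper's method) of why a disjoint PI can be much narrower than a single interval: for a bimodal conditional density, thresholding the density at a fixed level yields a highest-density region that splits into one interval per mode. The mixture parameters, grid, and threshold below are illustrative assumptions.

```python
import math

def density(y, mus=(-2.0, 2.0), sigma=0.5, weights=(0.5, 0.5)):
    """Hypothetical bimodal conditional density p(y|x): a two-component Gaussian mixture."""
    return sum(
        w * math.exp(-0.5 * ((y - m) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
        for w, m in zip(weights, mus)
    )

def disjoint_intervals(level, lo=-6.0, hi=6.0, n=2401):
    """Return the union of intervals {y : p(y) >= level}, found on a uniform grid."""
    ys = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    inside = [density(y) >= level for y in ys]
    intervals, start = [], None
    for i, flag in enumerate(inside):
        if flag and start is None:
            start = ys[i]                      # entering a high-density run
        elif not flag and start is not None:
            intervals.append((start, ys[i - 1]))  # leaving it: close the interval
            start = None
    if start is not None:
        intervals.append((start, ys[-1]))
    return intervals

intervals = disjoint_intervals(level=0.05)
print(intervals)  # two disjoint intervals, one around each mode
```

A single continuous PI covering the same region would have to span the low-density gap between the modes, so its width would be roughly the sum of the two intervals plus the gap, which is the inefficiency the paper targets.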
